NEPSE Live

by woundless_insurance

Get structured, real-time data from the Nepal Stock Exchange (NEPSE) automatically. Perfect for developers, analysts, and investors building dashboards or automated trading systems.

121 runs
13 users
Try This Actor


About NEPSE Live

Need to track the Nepal Stock Exchange (NEPSE) in real time without constantly refreshing a browser? I built this actor to solve exactly that. It pulls live NEPSE data—current prices, indices, and market movements—directly from the source and delivers it in a clean, structured format like JSON or CSV. Forget manual copying; this automates the entire data collection process, letting you focus on analysis instead of data entry.

I use it primarily for two things: building live dashboards and feeding automated trading models. Whether you're a developer creating a financial app, an analyst monitoring daily fluctuations, or an investor wanting instant price alerts, this tool gets you the raw market data you need.

It runs reliably on Apify's infrastructure, so you can schedule it to run at intervals or trigger it from your own systems. It's basically a dedicated pipeline for NEPSE live prices, saving you hours of manual work and ensuring you never miss a market shift because you were stuck manually checking a website.

What does this actor do?

NEPSE Live is a web scraping and automation actor on the Apify platform. It extracts live market data from the Nepal Stock Exchange—current prices, indices, and market movements—and delivers it as structured output (JSON or CSV), running entirely in the cloud with no local setup.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation
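As an example of the webhook support, you can register a webhook with Apify so your own endpoint is notified whenever a run finishes. A hedged sketch of such a definition (the actor ID and request URL are placeholders you would replace with your own):

```json
{
  "eventTypes": ["ACTOR.RUN.SUCCEEDED"],
  "condition": { "actorId": "YOUR_ACTOR_ID" },
  "requestUrl": "https://example.com/nepse-webhook"
}
```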

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
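Beyond the console, you can trigger the actor from your own code through Apify's REST API. A minimal sketch using Node 18+'s built-in fetch; the actor ID, token, and start URL are placeholder assumptions, and the `run-sync-get-dataset-items` endpoint runs the actor and returns its dataset in one call:

```javascript
// Build the "run synchronously and return dataset items" endpoint URL.
// The actor ID and token below are placeholders, not real credentials.
function buildRunUrl(actorId, token) {
  const base = 'https://api.apify.com/v2/acts';
  return `${base}/${encodeURIComponent(actorId)}/run-sync-get-dataset-items?token=${token}`;
}

// Start a run and return the scraped items as a parsed JSON array.
async function fetchNepseData(actorId, token) {
  const res = await fetch(buildRunUrl(actorId, token), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      startUrls: [{ url: 'https://www.nepalstock.com/' }],
    }),
  });
  return res.json();
}
```

From there, `fetchNepseData('your-username~nepse-live', process.env.APIFY_TOKEN)` would resolve to the run's dataset items.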

Documentation

NEPSE Live Actor

An Apify actor built on the PuppeteerCrawler template for automated web scraping. It's designed to crawl JavaScript-rendered websites, making it suitable for extracting live data from dynamic pages like the Nepal Stock Exchange (NEPSE).

Overview

This actor uses Crawlee's PuppeteerCrawler to run headless Chrome browsers, allowing it to interact with and extract data from modern web applications. It includes a structured input schema, proxy rotation support, and organized request handling via a custom router.

Key Features

  • Puppeteer Crawler: Parallel crawling using headless Chrome to handle JavaScript-heavy sites.
  • Proxy Configuration: Integrated support for Apify Proxy or custom proxies to manage IP blocking, configured via Actor.createProxyConfiguration().
  • Input Schema: Validates run configuration and start URLs through a defined JSON schema.
  • Structured Output: Saves scraped data into an Apify Dataset for easy export and integration.
  • Custom Routing: Uses a routes.js file to define specific handlers for different URL patterns.

How to Use

The actor's workflow is straightforward:

  1. Input: Provide your configuration (like start URLs) via the INPUT.json file or the Apify console.
  2. Setup: The actor initializes a proxy configuration and creates a PuppeteerCrawler instance.
  3. Handling: Page requests are processed by a custom router (routes.js). You define handlers for specific page types (e.g., a 'detail' handler for individual detail pages).
  4. Execution: The crawler runs through the provided URLs, executing JavaScript and extracting data as defined in your handlers.
  5. Output: Results are pushed to an Apify Dataset.

A basic handler in routes.js looks like this:

import { Dataset, createPuppeteerRouter } from 'crawlee';

export const router = createPuppeteerRouter();

router.addHandler('detail', async ({ request, page, log }) => {
    log.info(`Scraping ${request.loadedUrl}`);
    // Extract the page title and store it in the default dataset.
    const title = await page.title();
    await Dataset.pushData({
        url: request.loadedUrl,
        title,
    });
});
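Under the hood, that router is just label-to-handler dispatch. A simplified, framework-free sketch of the same pattern (this is an illustration, not Crawlee's actual implementation; a stubbed page object stands in for a real Puppeteer page):

```javascript
// Minimal illustration of the label-based routing pattern used in routes.js:
// handlers are stored in a Map keyed by label and dispatched per request.
function createRouter() {
  const handlers = new Map();
  return {
    addHandler(label, fn) {
      handlers.set(label, fn);
    },
    async route(label, context) {
      const fn = handlers.get(label);
      if (!fn) throw new Error(`No handler for label: ${label}`);
      return fn(context);
    },
  };
}

const router = createRouter();
router.addHandler('detail', async ({ request, page }) => ({
  url: request.loadedUrl,
  title: await page.title(),
}));
```

Calling `router.route('detail', context)` with a context whose `page.title()` resolves to a string returns the extracted item, which is exactly what the real crawler does for each enqueued request.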

Input/Output

Input:
The actor expects a JSON input containing startUrls (an array of URLs to crawl). You can extend the input schema to include custom parameters like maximum pages or specific selectors.
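A hedged example of what the INPUT.json might look like: startUrls follows Apify's usual array-of-objects convention, while maxRequestsPerCrawl is shown as one plausible custom parameter (the exact extra fields depend on how the schema is extended):

```json
{
  "startUrls": [
    { "url": "https://www.nepalstock.com/" }
  ],
  "maxRequestsPerCrawl": 50
}
```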

Output:
Data is stored in an Apify Dataset. Each item's structure depends on your custom handlers, typically containing the URL and extracted data points (like titles, prices, or live figures).
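Once the dataset is exported as JSON, post-processing is ordinary data wrangling. A sketch using hypothetical item fields (symbol, ltp, change are example names; the real fields depend on what your handlers push):

```javascript
// Sketch: pick out the day's top gainers from exported dataset items.
// Field names (symbol, ltp, change) are hypothetical examples.
function topGainers(items, n = 3) {
  return items
    .filter((it) => typeof it.change === 'number' && it.change > 0)
    .sort((a, b) => b.change - a.change)
    .slice(0, n)
    .map((it) => it.symbol);
}

const sample = [
  { symbol: 'NABIL', ltp: 510, change: 1.8 },
  { symbol: 'NTC', ltp: 890, change: -0.4 },
  { symbol: 'NRIC', ltp: 705, change: 2.6 },
];
console.log(topGainers(sample, 2)); // → [ 'NRIC', 'NABIL' ]
```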


Common Use Cases

  • Market Research - gather competitive intelligence and market data
  • Lead Generation - extract contact information for sales outreach
  • Price Monitoring - track competitor pricing and product changes
  • Content Aggregation - collect and organize content from multiple sources

Ready to Get Started?

Try NEPSE Live now on Apify. Free tier available with no credit card required.


Actor Information

  • Developer: woundless_insurance
  • Pricing: Paid
  • Total Runs: 121
  • Active Users: 13
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
