Output to Dataset
by njoylab
About Output to Dataset
Merges outputs from multiple actors into a single dataset. Execute actors in series or parallel, combine data from datasets, key-value stores, webhooks, and export the final output in various formats.
What does this actor do?
Output to Dataset is a data-integration and automation actor on the Apify platform. Rather than scraping pages itself, it runs other actors, collects their outputs from datasets, key-value stores, actor runs, and webhooks, and merges everything into a single dataset in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
Documentation
Output to Dataset is an Apify actor that merges outputs from multiple actors into a single dataset. Execute actors in series or parallel, combine data from datasets, key-value stores, and webhooks, and get ready-made JSON/CSV/XLSX download links straight from the run logs.

## Features

- Multiple Data Sources: Fetch data from:
  - Existing datasets
  - Key-value stores
  - Actor runs
  - Webhook URLs
- Actor Execution: Run multiple actors and collect their outputs
  - Parallel execution: Run all actors simultaneously for faster results
  - Series execution: Run actors one after another
- Merge Strategies:
  - Append: Combine all data (keeps duplicates)
  - Deduplicate: Remove duplicates based on specified fields
- Data Transformations: Filter, remap, pick, or enrich records before merging
- Instant Downloads: Every run logs the dataset console link plus JSON, CSV, and XLSX download URLs powered by Apify dataset exports

## Input Configuration

### Sources

Array of existing data sources to merge:

```json
{
  "sources": [
    { "type": "dataset", "id": "datasetId123" },
    { "type": "keyValueStore", "id": "storeId456", "key": "OUTPUT" },
    { "type": "actorRun", "id": "runId789" },
    { "type": "webhook", "id": "https://api.example.com/data" }
  ]
}
```

### Actor Runs

Array of actors to execute before merging:

```json
{
  "actorRuns": [
    {
      "actorId": "apify/web-scraper",
      "input": { "startUrls": [{ "url": "https://example.com" }] },
      "outputType": "dataset"
    },
    {
      "actorId": "apify/google-search-scraper",
      "input": { "queries": "apify" },
      "outputType": "keyValueStore",
      "outputKey": "OUTPUT"
    }
  ]
}
```

Use `outputType` to control where each run stores its data before merging:

- `dataset` (default): read the items that the actor pushed to its default dataset; no `outputKey` needed.
- `keyValueStore`: read a file/record saved via `Actor.setValue` in the default key-value store; set `outputKey` to the record name (e.g., `"MERGED_OUTPUT.json"`).
Example mixing both sinks:

```json
{
  "actorRuns": [
    { "actorId": "my-dataset-actor", "outputType": "dataset" },
    { "actorId": "my-exporting-actor", "outputType": "keyValueStore", "outputKey": "LATEST_EXPORT" }
  ]
}
```

### Execution Mode

- `parallel` (default): Run all actors at the same time
- `series`: Run actors one after another

### Merge Strategy

- `append` (default): Combine all items, keeping duplicates
- `deduplicate`: Remove duplicate items based on specified fields

```json
{
  "mergeStrategy": "deduplicate",
  "deduplicateBy": ["url", "title"]
}
```

### Output Location

All merged records are pushed to the actor's default dataset. Use Apify Console exports (JSON, CSV, XLSX, etc.) when you need a specific download format.

### Transformations

Apply zero or more transformations to each item before the merge step. Transformations run in the order provided.

```json
{
  "transformations": [
    { "type": "filter", "field": "price", "operator": "lessThan", "value": 50 },
    { "type": "mapFields", "mapping": { "title": "product.name", "price": "product.price" }, "removeOriginal": true },
    { "type": "pickFields", "fields": ["product.name", "product.price", "url"] },
    { "type": "setField", "field": "currency", "value": "USD", "overwrite": false }
  ]
}
```

Supported transformation types:

- `filter`: keep only items whose field matches a condition (`equals`, `notEquals`, `contains`, `greaterThan`, `lessThan`, `exists`).
- `mapFields`: copy data from one field path to another (with optional removal of the original field).
- `pickFields`: keep only the listed field paths (missing values are kept unless `dropUndefined` is true).
- `setField`: write a static value into a field, optionally skipping existing values unless `overwrite` is true.
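To make the transformation semantics concrete, here is an illustrative re-implementation of the pipeline on plain Python dicts. It is a sketch, not the actor's code: the dotted-path handling, the `mapFields` direction (key = source path, value = destination path), and `pickFields` producing flattened keys are all my assumptions, and only two `filter` operators are shown.

```python
# Illustrative transformation pipeline on plain dicts (assumed semantics).

def get_path(item, path):
    """Read a possibly dotted path like 'product.price'; None if missing."""
    cur = item
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return None
        cur = cur[part]
    return cur

def set_path(item, path, value):
    """Write a value at a dotted path, creating nested dicts as needed."""
    parts = path.split(".")
    cur = item
    for part in parts[:-1]:
        cur = cur.setdefault(part, {})
    cur[parts[-1]] = value

def apply_transformations(items, transformations):
    for t in transformations:
        if t["type"] == "filter":
            # Only lessThan and equals shown; the actor lists more operators.
            if t["operator"] == "lessThan":
                items = [i for i in items if (get_path(i, t["field"]) or 0) < t["value"]]
            elif t["operator"] == "equals":
                items = [i for i in items if get_path(i, t["field"]) == t["value"]]
        elif t["type"] == "mapFields":
            # Assumed direction: copy mapping key (source) to mapping value (dest).
            for i in items:
                for src, dst in t["mapping"].items():
                    set_path(i, dst, get_path(i, src))
                    if t.get("removeOriginal") and "." not in src:
                        i.pop(src, None)
        elif t["type"] == "pickFields":
            # Assumption: picked dotted paths become flat keys in the output.
            items = [{f: get_path(i, f) for f in t["fields"]} for i in items]
        elif t["type"] == "setField":
            for i in items:
                if t.get("overwrite", True) or get_path(i, t["field"]) is None:
                    set_path(i, t["field"], t["value"])
    return items

items = [
    {"product": {"name": "Pen", "price": 3}, "url": "u1"},
    {"product": {"name": "Desk", "price": 120}, "url": "u2"},
]
transforms = [
    {"type": "filter", "field": "product.price", "operator": "lessThan", "value": 50},
    {"type": "setField", "field": "currency", "value": "USD", "overwrite": False},
    {"type": "pickFields", "fields": ["product.name", "currency"]},
]
result = apply_transformations(items, transforms)
print(result)  # [{'product.name': 'Pen', 'currency': 'USD'}]
```

Because transformations run in the order given, filtering before `pickFields` lets the filter see fields that the pick step later drops.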
## Complete Example

```json
{
  "actorRuns": [
    {
      "actorId": "apify/web-scraper",
      "input": {
        "startUrls": [{ "url": "https://apify.com/store" }],
        "pageFunction": "async function pageFunction(context) { return context.request; }"
      }
    },
    {
      "actorId": "apify/google-search-scraper",
      "input": { "queries": "web scraping" }
    }
  ],
  "sources": [
    { "type": "dataset", "id": "existingDatasetId" }
  ],
  "executionMode": "parallel",
  "mergeStrategy": "deduplicate",
  "deduplicateBy": ["url"]
}
```

## Output

The actor saves all merged data to its default dataset. You can:

1. Access via Apify Console: View the dataset in the run's output tab
2. Download: Export the dataset in any format from the Apify platform
3. Follow the logs: After each run, the actor prints both a console link and ready-to-use JSON/CSV/XLSX download URLs for the merged dataset

## Use Cases

### 1. Merge Multiple Scraping Runs

Run the same scraper with different inputs and merge the results:

```json
{
  "actorRuns": [
    { "actorId": "my-scraper", "input": { "category": "electronics" } },
    { "actorId": "my-scraper", "input": { "category": "books" } },
    { "actorId": "my-scraper", "input": { "category": "clothing" } }
  ],
  "executionMode": "parallel",
  "mergeStrategy": "append"
}
```

### 2. Combine Historical Data

Merge data from multiple previous runs:

```json
{
  "sources": [
    { "type": "actorRun", "id": "run1" },
    { "type": "actorRun", "id": "run2" },
    { "type": "actorRun", "id": "run3" }
  ],
  "mergeStrategy": "deduplicate",
  "deduplicateBy": ["id"]
}
```

### 3. Aggregate Multiple Datasets

Combine existing datasets into one:

```json
{
  "sources": [
    { "type": "dataset", "id": "dataset1" },
    { "type": "dataset", "id": "dataset2" },
    { "type": "dataset", "id": "dataset3" }
  ]
}
```
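The deduplicate strategy used in the "Combine Historical Data" use case can be sketched as keeping the first item seen for each unique combination of the `deduplicateBy` field values. This is a minimal Python illustration of that behavior, not the actor's actual implementation:

```python
# Sketch of deduplicate-by-fields merging across several runs' outputs.

def deduplicate(items, fields):
    """Keep the first item seen for each unique tuple of field values."""
    seen = set()
    result = []
    for item in items:
        key = tuple(item.get(f) for f in fields)
        if key not in seen:
            seen.add(key)
            result.append(item)
    return result

run1 = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
run2 = [{"id": 2, "name": "Bob"}, {"id": 3, "name": "Carol"}]
merged = deduplicate(run1 + run2, ["id"])
print([item["id"] for item in merged])  # [1, 2, 3]
```

Deduplicating on a stable identifier such as `id` or `url` makes repeated scheduled runs safe to merge: re-scraped records collapse into one.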
Categories
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try Output to Dataset now on Apify. Free tier available with no credit card required.
Actor Information
- Developer
- njoylab
- Pricing
- Paid
- Total Runs
- 40
- Active Users
- 5
Related Actors
Tecdoc Car Parts
by making-data-meaningful
OpenRouter - Unified LLM Interface for ChatGPT, Claude, Gemini
by xyzzy
Google Sheets Import & Export
by lukaskrivka
Send Email
by apify