S3 Bucket Uploader
by mstephen190
Upload the items from the default dataset of an actor's run to an S3 bucket in JSON format.
About S3 Bucket Uploader
What does this actor do?
S3 Bucket Uploader is an integration and automation tool available on the Apify platform. It takes the items from the default dataset of an actor's run and uploads them to an Amazon S3 bucket in JSON format, so you can move your run results into your own storage automatically.
Key Features
- Cloud-based execution - no local setup required
- Uploads an entire dataset as one JSON file, or each item as its own file
- Template variables for building unique file and path names
- API access for integration with your applications
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results (you can also start runs via the API, as sketched below)
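If you prefer to start runs programmatically, the actor can be called through the Apify API. Below is a minimal sketch using the official apify-client package for Node.js; the actor slug `mstephen190/s3-bucket-uploader` and the `runId` input field name are assumptions based on this page, while `pathName`, `fileName`, and `separateItems` come from the documentation below.

```typescript
// Minimal sketch: start the uploader via the Apify API client.
// Assumptions: the actor slug and the "runId" field name are illustrative;
// pathName/fileName/separateItems are documented in the section below.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// call() starts the run and waits for it to finish.
const run = await client.actor('mstephen190/s3-bucket-uploader').call({
    runId: 'BC6hdJvyNQStvYLL8',                // run whose default dataset to upload
    pathName: '{actorName}/datasets/{date}',   // template variables, see docs below
    fileName: '{uuid}-item{incrementor}',
    separateItems: true,
    // ...plus your S3 bucket details and AWS credentials, under the
    // field names defined by the actor's input schema.
});

console.log(`Upload run ${run.id} finished with status ${run.status}`);
```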
Documentation
# Amazon S3 Bucket upload

This actor allows you to upload the default dataset of an actor's run to an AWS S3 bucket. It provides easy-to-use tools for injecting information into file names and path names, and it lets you either upload the entire dataset as one file or upload each dataset item as its own file. The Amazon S3 Bucket upload actor is best used within the webhooks of another actor, so that the other actor's default dataset is uploaded automatically.

## Input

Divided into two short sections, this actor's input is intuitive.

### Bucket configuration

This section contains the details for configuring your bucket. Ensure that you've filled in everything exactly right, especially your credentials.

> To learn more about how to get your credentials, check out this page.

### Data configuration

In this section, you provide the run ID of the actor, as well as the path and file name for the uploaded file(s). There are some restrictions on the fileName and pathName inputs to prevent unnecessary errors:

- pathName cannot start or end with a / character.
- Neither field can include a period (.) character (the file extension is added automatically for you).
- fileName cannot include any / characters.

Within pathName and fileName, you have access to 6 variables:

| Variable    | Example                              | Description                                          | Unique for each item |
| ----------- | ------------------------------------ | ---------------------------------------------------- | -------------------- |
| actorName   | my-actor                             | The name of the actor matching the provided run ID.  | No                   |
| runId       | BC6hdJvyNQStvYLL8                    | The run ID of the actor that was provided.           | No                   |
| date        | 2022-05-29                           | The date on which the actor finished its run.        | No                   |
| now         | 1653851198127                        | The current time in milliseconds.                    | Yes                  |
| uuid        | b2638dac-00b5-4e29-b698-fe70b6ee6e0b | A universally unique identifier.                     | Yes                  |
| incrementor | 3                                    | An integer that increments for each item.            | Yes                  |

Variables let you easily generate unique file names when writing multiple files, preventing files from being overwritten. now and uuid are great options when you need unique values.

Here is an example of some variables being used in the actor's input:

```json
{
    "pathName": "{actorName}/datasets/{date}",
    "fileName": "{uuid}-item{incrementor}",
    "separateItems": true
}
```

> Notice that you must wrap a variable name in {curlyBraces} for it to work.

Here is what the final path for one file might look like with this configuration:

```text
my-actor/datasets/2022-05-29/b2638dac-00b5-4e29-b698-fe70b6ee6e0b-item7.json
```

By default, the actor writes the entire dataset to the S3 bucket as one file. To write each dataset item as a separate file instead, set separateItems to true. When this option is set to true, ensure that fileName uses at least one unique variable; otherwise the actor will keep overwriting the same file. (Using unique variables only in pathName would also work, but is not recommended.)
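Since the actor is designed to run from another actor's webhooks, one way to automate the whole flow is to create a webhook that fires on every successful run of your scraping actor and starts the uploader with that run's ID. The sketch below uses apify-client and Apify's webhook payload-template syntax, where {{resource.id}} resolves to the triggering run's ID; the uploader's slug and its runId field name are assumptions, as above.

```typescript
// Sketch: trigger the uploader automatically after each successful run
// of another actor, via an Apify webhook. The uploader slug and "runId"
// field name are assumptions; {{...}} placeholders are Apify's
// payload-template variables, filled in when the webhook fires.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

await client.webhooks().create({
    eventTypes: ['ACTOR.RUN.SUCCEEDED'],
    condition: { actorId: '<ID_OF_YOUR_SCRAPING_ACTOR>' },
    // Fire a POST that starts the uploader through the Apify API.
    requestUrl: `https://api.apify.com/v2/acts/mstephen190~s3-bucket-uploader/runs?token=${process.env.APIFY_TOKEN}`,
    // {{resource.id}} is replaced with the triggering run's ID.
    payloadTemplate: `{
        "runId": {{resource.id}},
        "pathName": "{actorName}/datasets/{date}",
        "fileName": "{uuid}-item{incrementor}",
        "separateItems": true
    }`,
});
```

With something like this in place, every successful run of the upstream actor pushes its default dataset to your bucket without any manual steps.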
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try S3 Bucket Uploader now on Apify. Free tier available with no credit card required.
Actor Information
- Developer: mstephen190
- Pricing: Paid
- Total Runs: 119,013
- Active Users: 41
Related Actors
Video Transcript Scraper: Youtube, X, Facebook, Tiktok, etc.
by invideoiq
Linkedin Profile Details Scraper + EMAIL (No Cookies Required)
by apimaestro
Twitter (X.com) Scraper Unlimited: No Limits
by apidojo
Content Checker
by jakubbalada
Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.
Learn more about Apify