Cloud & Hosting

Raspberry Pi API Gateway: Handling 2M Monthly Requests in a Closet

David Park

January 31, 2026

14 min read

A Reddit post about a Raspberry Pi 4 handling 2 million API requests monthly from a closet sparked a revolution in self-hosted infrastructure. This guide explores why it works, how to build it, and when you actually need cloud services.


The Closet That Runs a Company: Why This Story Resonated

When that Reddit post hit r/selfhosted in late 2025, it wasn't just another "look at my homelab" flex. It was a manifesto. A developer, tired of the "we need AWS for everything" mentality at work, set up a Raspberry Pi 4 as an API gateway to prove a point. Eight months later, that little $75 computer was routing all their internal APIs, handling authentication, enforcing rate limits—the whole shebang—processing about 2 million requests monthly. And it lived in a closet next to Christmas decorations.

The post got nearly 1,700 upvotes and 182 comments because it tapped into something real: cloud fatigue. Not every application needs to be a distributed, multi-region, auto-scaling behemoth. Sometimes, you just need something that works reliably, costs almost nothing to run, and doesn't lock you into vendor ecosystems. The comment section exploded with questions: "What software are you using?" "How do you handle SSL?" "What about power outages?" "Seriously, no cooling issues?"

What made this story compelling wasn't just the technical achievement—it was the human element. The panic when the power went out. The wife asking about the closet. The sheer absurdity of enterprise infrastructure coexisting with holiday storage. This wasn't theory; this was someone's actual production setup, serving real business needs while costing less per year than a single AWS reserved instance.

Breaking Down the Numbers: 2 Million Requests on Raspberry Pi Hardware

Let's get specific about what 2 million monthly requests actually means for a Raspberry Pi 4. The model matters here—the Pi 4 with 4GB or 8GB RAM is what we're talking about, not the earlier versions. At 2 million requests per month, we're looking at roughly 66,000 requests per day, or about 2,750 requests per hour if traffic were perfectly distributed (which it never is).

Peak loads matter more. If this serves a typical 8-hour workday, you might see 8,000 requests per hour during busy periods. That's about 2.2 requests per second sustained. The Raspberry Pi 4's quad-core ARM Cortex-A72 processor at 1.5GHz can handle this easily—I've stress-tested similar setups that handle 50+ requests per second before showing strain. The bottleneck isn't the CPU for API gateway work; it's network I/O and memory.
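These back-of-envelope numbers are easy to sanity-check yourself. Here's a quick sketch in Python; the assumption that 90% of traffic lands inside an 8-hour workday is mine, not something stated in the original post:

```python
# Back-of-envelope capacity check for 2M requests/month on a Pi 4.
MONTHLY_REQUESTS = 2_000_000

per_day = MONTHLY_REQUESTS / 30        # ~66,667 requests/day
per_hour_flat = per_day / 24           # ~2,778/hour if perfectly even (it never is)

# Assumption: 90% of traffic arrives during an 8-hour workday.
workday_share = 0.9
peak_per_hour = per_day * workday_share / 8
peak_per_second = peak_per_hour / 3600

print(f"{per_day:.0f} req/day, ~{peak_per_hour:.0f} req/h peak, "
      f"~{peak_per_second:.1f} req/s sustained")
```

Around 2 requests per second sustained, which is well inside what a Pi 4 handles for I/O-bound gateway work.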

Where people get skeptical is the "in my closet" part. Thermal throttling is real. The Pi 4 can get hot under sustained load, especially without proper airflow. But here's the thing: an API gateway isn't doing heavy computation. It's routing requests, checking tokens, maybe transforming some JSON. It's I/O bound, not CPU bound. With a simple Argon One M.2 Case or even the official Pi case with a small fan, temperatures stay manageable. The original poster mentioned their setup had been running for 8 months—if thermal throttling was an issue, they'd have noticed performance degradation long before.

The real magic is in the efficiency. At idle, a Pi 4 draws about 3-4 watts. Under load, maybe 6-7 watts. Compare that to even the most efficient cloud instances, which consume orders of magnitude more power just at idle. Your annual electricity cost for this setup? About $5-10. That wouldn't even keep a t4g.nano AWS instance running for a year.

The Software Stack: What Actually Runs on That Pi

The Reddit comments were desperate to know: what software makes this work? The poster never specified in the original, but based on common patterns in the self-hosted community and what makes sense for this workload, we can reconstruct the likely stack.

For the API gateway itself, three contenders dominate: NGINX, Traefik, and Kong. NGINX is the battle-tested veteran—incredibly stable, relatively lightweight, and with modules for almost everything. Traefik is the modern favorite, especially in Docker environments, with automatic SSL via Let's Encrypt and great Kubernetes integration. Kong is more feature-rich but heavier, built on NGINX but with added API management features.

My money's on either NGINX with some custom Lua scripts or Traefik. Why? Because they're simple to configure for basic routing, authentication, and rate limiting. The poster mentioned "handling auth"—this could be as simple as JWT validation with a shared secret, or integration with something like Keycloak or Authelia running on another Pi. Rate limiting is native in both NGINX and Traefik.
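As a concrete illustration, NGINX's native rate limiting takes only a few lines. This is a hypothetical sketch, not the poster's actual config; the zone size, rate, upstream address, hostname, and certificate paths are all placeholders:

```nginx
# Hypothetical internal-API gateway config; names and paths are assumptions.
limit_req_zone $binary_remote_addr zone=api_rl:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name gateway.internal;                 # placeholder hostname

    ssl_certificate     /etc/ssl/gateway.crt;     # placeholder cert paths
    ssl_certificate_key /etc/ssl/gateway.key;

    location /api/ {
        limit_req zone=api_rl burst=20 nodelay;   # rate limiting, native to NGINX
        proxy_pass http://127.0.0.1:8080;         # assumed backend address
        proxy_set_header Authorization $http_authorization;
    }
}
```

Traefik expresses the same idea declaratively via its `rateLimit` middleware instead of a config directive.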

For management, NGINX Proxy Manager (a web UI for NGINX) is a strong possibility. It gives you a nice dashboard to manage proxies, SSL certificates, and access lists without editing config files. Perfect for when you want something stable that "just works" without constant tinkering.

The operating system is almost certainly Raspberry Pi OS Lite (64-bit). No desktop environment, minimal services running. Docker might be in the picture, especially if using Traefik, but a bare-metal NGINX install is even lighter. For monitoring, something like Netdata or Prometheus Node Exporter gives visibility without significant overhead.

When This Works Brilliantly (And When It Doesn't)


Here's where we need to be honest—not every workload belongs on a Pi in a closet. The original post specified "internal APIs." That's crucial. This isn't serving public traffic from your residential internet connection (though some brave souls do that too).

This setup shines for:

  • Internal microservices communication: When your backend services need to talk to each other within a private network
  • Development and staging environments: Perfect for teams that want reproducible environments without cloud bills
  • Small business internal tools: HR systems, inventory management, internal dashboards
  • IoT aggregation: Collecting data from devices and providing a unified API
  • Learning and prototyping: Understanding API gateway concepts without financial risk

Where it falls short:

  • Public-facing high-traffic APIs: Your residential internet upload speed and reliability become bottlenecks
  • Regulated industries with specific compliance requirements: HIPAA, PCI DSS, etc., require certified infrastructure
  • Global low-latency requirements: A single Pi in Ohio can't serve Tokyo users with sub-100ms latency
  • Stateful applications needing 99.99% uptime: No redundant power, single point of failure

The sweet spot? Companies with 5-50 employees, all in one or two locations, needing internal tools. Or development teams tired of surprise AWS bills. The original poster proved this can work in production—but "production" has different meanings for different businesses.


The Power Problem: From Panic to Preparedness

"Power went out last month and my wife asked why I was panicking about the closet." This line got more laughs than anything else in the original post. But it highlights the single biggest weakness of this approach: reliability.

A Pi has no battery backup. Your residential power probably isn't on a UPS (though it should be for critical equipment). When the power flickers, everything goes down. For internal tools used during work hours, this might be acceptable—if the office loses power, people aren't working anyway. But for anything that needs to be available 24/7, you need a plan.

The simplest solution: a UPS battery backup. A small 600VA unit can keep a Pi running for hours. Pair it with a script that gracefully shuts down the Pi when the battery gets low, and you've solved the sudden power loss problem. Total cost: about $80.
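Here's one way that low-battery script might look, sketched on top of Network UPS Tools (NUT) and its `upsc` command; the UPS name and the 20% threshold are assumptions:

```python
import subprocess

def should_shutdown(status: str, charge_pct: int, threshold: int = 20) -> bool:
    """Shut down only when on battery ("OB" in NUT's convention) AND below threshold."""
    return status == "OB" and charge_pct < threshold

def check_ups(ups_name: str = "closet-ups"):  # ups_name is a placeholder
    # `upsc` (from Network UPS Tools) prints "key: value" lines.
    out = subprocess.run(["upsc", ups_name], capture_output=True, text=True).stdout
    info = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
    status = info.get("ups.status", "OL")          # "OL" = on line power
    charge = int(info.get("battery.charge", "100"))
    if should_shutdown(status, charge):
        subprocess.run(["sudo", "shutdown", "-h", "now"])
```

Run it every minute from cron, or let NUT's own `upsmon` handle the shutdown if you prefer the stock tooling.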

But what about internet outages? If your Pi is routing internal APIs, it might not need external internet to function. Internal DNS and service discovery can keep working. The authentication piece gets tricky if you're using cloud OAuth providers, but for internal JWT tokens, everything stays functional.

The real concern is data integrity. Is the Pi writing logs or metrics somewhere? Is there a database involved? For a pure API gateway that's mostly stateless (routes, SSL certs, rate limit counters), even a sudden power loss isn't catastrophic. The configuration is in source control, and the Pi reboots to a known state. This is actually more resilient than you'd think—cloud VMs can terminate unexpectedly too.

The key is expectations. If your company can tolerate an hour of downtime during a rare power outage, this works. If you need five-nines uptime, you need redundant everything—and at that point, yes, you probably need cloud infrastructure or a proper colocation setup.

Building Your Own: A Practical 2026 Guide

Want to try this yourself? Here's how I'd set it up today, incorporating what we've learned since that original 2025 post.

Start with hardware: a Raspberry Pi 4 8GB (the extra RAM is cheap insurance), a good quality power supply (not the cheap ones), and a case with active cooling. The Argon One cases are excellent. Add a microSD card (get the high-endurance version) or, better yet, boot from a USB SSD for faster I/O and better reliability.

For software, I'm leaning toward Traefik in 2026. Why? The automatic SSL management has gotten even better, the Docker integration is seamless if you're containerizing your services, and the configuration is declarative. Install Docker and Docker Compose, then set up a docker-compose.yml with Traefik and whatever services you're proxying.
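A minimal docker-compose.yml along those lines might look like this. It's a sketch, not a drop-in config: the image tag, email, hostname, and the example backend service are all placeholders:

```yaml
# Sketch of a Traefik v3 gateway; all names below are assumptions.
services:
  traefik:
    image: traefik:v3.3
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
      - "--certificatesresolvers.le.acme.email=admin@example.com"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  directory-api:                       # example internal service
    image: example/directory-api       # placeholder image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.directory.rule=Host(`directory.internal.example.com`)"
      - "traefik.http.routers.directory.entrypoints=websecure"
      - "traefik.http.routers.directory.tls.certresolver=le"
```

Each new service you proxy is just another container with a handful of labels; Traefik picks it up without a restart.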

Authentication? Use Traefik's forwardAuth middleware pointing to a simple Go or Python service that validates JWT tokens. Or if you need full OAuth, set up Authelia (https://www.authelia.com) as your internal SSO. It's lighter than Keycloak and perfect for internal tools.

Monitoring is non-negotiable. Install Prometheus Node Exporter for system metrics, and set up Traefik to export metrics too. Use Grafana on another Pi (or the same one if resources allow) to create dashboards. Set up alerting when CPU stays above 80% for 5 minutes, or when memory usage gets high.
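That CPU alert translates almost directly into a Prometheus alerting rule. A sketch, assuming standard Node Exporter metrics; the group name and severity label are placeholders:

```yaml
# Hypothetical Prometheus alerting rule for the Pi gateway.
groups:
  - name: pi-gateway
    rules:
      - alert: HighCPU
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pi gateway CPU above 80% for 5 minutes"
```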

Backup your configuration daily to a cloud storage provider or another machine. The entire setup should be reproducible from a git repository. If your Pi dies, you should be able to buy a new one, flash the OS, clone your repo, and be back in business in under an hour.
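A nightly cron entry covers the off-box copy. This is a sketch; the paths and the backup host are placeholders:

```
# crontab on the Pi: 02:15 nightly config backup (paths/host are placeholders)
15 2 * * * tar czf - /etc/nginx /opt/gateway | ssh backup@nas.internal "cat > /backups/pi-gateway-$(date +\%F).tar.gz"
```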

Common Pitfalls and Questions from the Comments


The Reddit comments revealed what people really worry about. Let's address the top concerns:

"Won't the SD card wear out?" Yes, if you're doing heavy writes. Solution: Use an SSD via USB 3.0, or configure logging to write to a RAM disk and flush periodically. Or get a high-endurance industrial SD card.

"What about security?" A Pi behind your firewall is more secure than a cloud instance with a public IP. Keep the OS updated, don't expose unnecessary ports, use fail2ban. For internal APIs, this is often more secure than cloud solutions because the attack surface is smaller.

"How do you handle updates without downtime?" With Traefik or NGINX, you can do zero-downtime reloads. For OS updates, schedule them during maintenance windows. Or better yet, have two Pis in an active-passive configuration with a floating IP.


"What if I need to scale beyond one Pi?" This is where things get interesting. You can run multiple Pi gateways behind a load balancer. Or use DNS round-robin. For internal APIs, you might not need to scale horizontally—vertical scaling (a more powerful SBC like an Orange Pi 5) might be enough.

"Isn't this just moving the problem?" Some comments argued that you're still maintaining infrastructure, just different infrastructure. True. But you're maintaining simple infrastructure with predictable costs. There's value in that.

The Bigger Picture: What This Means for Cloud Strategy

The original post wasn't really about Raspberry Pis. It was about questioning defaults. When your coworker says "we need AWS for everything," they're expressing a bias, not stating a fact.

In 2026, the conversation has matured. It's not "cloud vs. on-prem" anymore. It's "right tool for the job." Some workloads belong in hyperscale clouds. Some belong at the edge. Some belong on a Pi in a closet. The smart approach is to evaluate based on actual requirements, not industry trends.

Consider hybrid approaches: Your public-facing APIs in AWS, your internal APIs on Pis. Your customer data in Azure, your development environments locally. The Pi gateway could even proxy requests to cloud services when needed, giving you a unified internal API regardless of where backends actually live.

And here's something people don't talk about enough: skill development. Maintaining a Pi gateway teaches you about networking, security, monitoring, and reliability in ways that clicking through AWS consoles never will. That knowledge makes you better at designing cloud systems too.

If you're managing complex data pipelines or web scraping tasks that feed your APIs, services like Apify handle the infrastructure-heavy parts while your Pi handles the internal routing. It's about choosing your battles.

Getting Started Without Betting the Company

You're convinced this might work for some of your workloads. How do you start without risking production systems?

First, replicate your simplest internal API. Maybe it's an employee directory or a meeting room booking system. Set up a Pi with Traefik pointing to that service. Test it with a few users. Monitor performance for a month.

Document everything. Calculate the actual costs (hardware, electricity, your time). Compare to what you're paying in the cloud. Be honest about the trade-offs.

If you need specific expertise you don't have in-house, you can find specialists on Fiverr who can help set up the initial configuration or troubleshoot performance issues. Sometimes paying for a few hours of expert help saves weeks of trial and error.

The goal isn't to replace all cloud spending. It's to make intentional choices. Maybe you save 80% of your internal tools budget. Maybe you discover that some things really do work better in the cloud. Either way, you've learned something valuable.

Beyond the Closet: What's Next for Edge Computing

That Pi in the closet is part of a bigger trend: the democratization of infrastructure. As single-board computers get more powerful (the Raspberry Pi 5 is already here, with even more capable alternatives emerging), what's possible at the edge expands.

Imagine department-level API gateways. Each team manages their own Pi for their services, with centralized monitoring. Or Pi clusters for redundancy. Or geographically distributed Pis for companies with multiple offices, syncing configuration but handling local traffic locally.

The tools are getting better too. Kubernetes on ARM is mature. Management platforms like Portainer make container administration trivial. The barrier to entry keeps dropping.

But the core insight remains: infrastructure should serve the business, not the other way around. Whether that's a global AWS deployment or a Pi in a closet depends on what your business actually needs. The original poster proved that sometimes, the simpler solution isn't just cheaper—it's better.

So look at your cloud bill. Look at your internal services. Ask yourself: which of these could live on a Pi? You might be surprised by the answer. And if you try it, maybe you'll be the one posting the success story in 2027.

David Park

Full-stack developer sharing insights on the latest tech trends and tools.