API & Integration

Can It Run DOOM? The 3-Month Challenge & API Madness

James Miller

March 10, 2026

11 min read

Why do developers spend months making DOOM run on impossible platforms through 13 layers of abstraction? This deep dive explores the technical madness, philosophical questions, and practical lessons behind one of programming's strangest traditions.


Introduction: The Question That Shouldn't Matter

"But can it run DOOM?" It's the programming equivalent of asking if water is wet—a question that's both completely ridiculous and strangely profound. In 2026, we're still asking it about everything from smart refrigerators to enterprise Java applications. The original Reddit post that inspired this article captures the essence perfectly: "What do 13 layers of wildly inefficient abstractions get you that cannot practically (but technically?) get ANY Java code running?"

That post wasn't just memeing. It was asking something real about why we do what we do. Why would anyone spend three months of wall clock time—actual calendar months—making DOOM run on something it was never meant to run on? And more importantly, what does this tell us about modern API integration, abstraction layers, and the strange psychology of developers?

I've been there. I've spent weekends—okay, maybe weeks—trying to make things work that absolutely shouldn't. And you know what? There's method to this madness. Let's unpack it.

The DOOM Porting Tradition: More Than Just a Meme

First, some context for anyone who's new to this particular corner of programming culture. DOOM, the 1993 first-person shooter, has become the "Hello, World!" of system validation. But not because it's simple—because it's complex enough to be interesting while being old enough to be theoretically portable.

The tradition started innocently enough. Someone got DOOM running on a pregnancy test. Then on a printer. Then in Microsoft Excel. Each port was more absurd than the last, and each required increasingly creative workarounds. But here's the thing: these aren't just party tricks. They're stress tests for understanding system boundaries.

When that Reddit poster mentioned "13 layers of wildly inefficient abstractions," they were describing something real. I've seen Java applications where the business logic is buried under so many frameworks, dependency injection containers, and middleware layers that you need a map just to find where the actual work happens. Running DOOM through that stack? That's the ultimate integration test.

Why 3 Months? The Reality of Integration Hell

Let's address the timeline question head-on. Three months sounds absurd for what should be a simple port. But anyone who's worked with enterprise systems in 2026 knows it's not.

Here's a breakdown of where that time actually goes:

  • Month 1: Understanding the existing architecture. You're not just learning the codebase—you're learning why certain decisions were made five years ago by developers who've since moved on. You're documenting dependencies, finding where the abstraction layers actually break, and discovering that the "documentation" is actually just comments that say "TODO: document this."
  • Month 2: Building the integration points. This is where you create those API layers that let DOOM talk to systems it was never meant to interface with. Maybe you're using WebSockets to stream game state to a React frontend. Maybe you're wrapping everything in Docker containers that talk to Kubernetes clusters. Each layer adds complexity.
  • Month 3: Debugging why it almost works. The game loads! The music plays! And then... it crashes when you try to shoot an imp. Now you're digging through logs, tracing execution through those 13 abstraction layers, and discovering that somewhere, someone decided to serialize game state as XML for "compatibility reasons."
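To make the Month 2 work concrete, here's a minimal sketch of one of those integration points: packaging game state into JSON frames, the kind of payload you'd push down a WebSocket to a frontend. The `GameState` fields are invented for illustration, not taken from the actual DOOM source.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical game state -- the fields are made up for this sketch.
@dataclass
class GameState:
    tick: int
    player_health: int
    player_x: float
    player_y: float

def encode_frame(state: GameState) -> str:
    """Serialize one tick of game state as a JSON message,
    the kind of thing you'd stream to a React frontend."""
    return json.dumps({"type": "state", "data": asdict(state)})

def decode_frame(raw: str) -> GameState:
    """Reverse the serialization on the receiving end."""
    payload = json.loads(raw)
    return GameState(**payload["data"])

state = GameState(tick=1024, player_health=87, player_x=12.5, player_y=-3.0)
assert decode_frame(encode_frame(state)) == state  # round-trips cleanly
```

The real version would sit behind an actual WebSocket server, but the serialization boundary is where most of the Month 3 bugs live, so it's worth nailing down first.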

This timeline isn't about the difficulty of porting DOOM. It's about the difficulty of working with modern, over-abstracted systems. The game is just the canary in the coal mine.

The 13 Layers: What Are We Actually Building Here?

Let's get specific about those abstraction layers. From what I've seen in real enterprise systems—and from my own questionable experiments—here's what a typical stack might look like when you're trying to do something ridiculous:

Layer 1-3: The Foundation Nobody Understands


You've got legacy code that everyone's afraid to touch. Maybe it's a custom game engine wrapper from 2015 that was built by an intern. Maybe it's a proprietary middleware layer that the company paid six figures for and now can't replace. These layers exist because removing them would be more work than maintaining them, even though nobody remembers how they work.

Layer 4-7: The "Modern" Integration Points

Here's where you add REST APIs, GraphQL endpoints, message queues, and WebSocket servers. Each one promises to make the system more flexible. Each one adds another point of failure. I once saw a system where game state was passed through Kafka, transformed by a Lambda function, stored in Redis, then served through an API Gateway—all for a single-player game.
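To show why that Kafka-to-Lambda-to-Redis-to-Gateway pipeline is absurd, here's a toy simulation where each "service" is just a function. The service names are labels, not real client calls; the point is that every hop re-serializes the same payload.

```python
import json

# Each "layer" stands in for a real service -- Kafka, Lambda, Redis,
# and API Gateway are just labels in this sketch.
def kafka_publish(state: dict) -> bytes:
    return json.dumps(state).encode()   # hop 1: encode for the queue

def lambda_transform(message: bytes) -> dict:
    state = json.loads(message)         # hop 2: decode...
    state["transformed"] = True         # ...touch one field...
    return state                        # ...and pass it along

def redis_store(state: dict) -> str:
    return json.dumps(state)            # hop 3: encode again to cache

def gateway_serve(cached: str) -> dict:
    return json.loads(cached)           # hop 4: decode again to serve

state = {"tick": 42, "player_health": 100}
served = gateway_serve(redis_store(lambda_transform(kafka_publish(state))))
# Four (de)serializations later, the client finally sees the state.
assert served["tick"] == 42 and served["transformed"] is True
```

Four serialization round-trips to move one dictionary, for a single-player game. Each hop is also a place for schema drift, encoding bugs, and latency to creep in.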

Layer 8-10: The Monitoring and Observability Stack

Because if you're going to do something stupid, you should at least know when it breaks. You're adding Prometheus metrics, Grafana dashboards, distributed tracing with Jaeger, and log aggregation that costs more per month than the original DOOM game made in its first year.
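The observability layer sounds heavyweight, but the core idea is simple. Here's a toy latency histogram; a real stack would use a Prometheus client library, but underneath it's just bucketed counters like these.

```python
import time
from collections import defaultdict

# Toy latency histogram: bucketed counters, like Prometheus uses.
BUCKETS_MS = [1, 5, 10, 50, 100, float("inf")]
histogram = defaultdict(int)

def observe(duration_ms: float) -> None:
    """Increment the first bucket whose upper bound fits the observation."""
    for bound in BUCKETS_MS:
        if duration_ms <= bound:
            histogram[bound] += 1
            return

def timed(fn):
    """Decorator: record how long each call takes."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            observe((time.perf_counter() - start) * 1000)
    return wrapper

@timed
def load_level(name: str) -> str:  # hypothetical game operation
    return f"loaded {name}"

load_level("E1M1")
assert sum(histogram.values()) == 1  # one observation recorded
```

The expensive part of observability isn't the counters; it's the storage, dashboards, and alerting bolted on top of them.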


Layer 11-13: The Business Justification

The final layers aren't code—they're PowerPoint slides explaining why this is actually a valuable use of company time. "It demonstrates our platform's flexibility!" "It's a proof of concept for future gaming integrations!" "It's... well, it's team building!"

The Real Question: Why Bother at All?


Back to the Reddit post's central question: "Why would I waste my time doing something that nobody realistically needs or wants?"

I've got three answers, and they're all true:

First, it's the ultimate learning exercise. When you try to make DOOM run on a system it wasn't designed for, you learn more about that system than you would from any documentation or training. You find the edge cases. You discover the undocumented limitations. You understand how the pieces actually fit together—or don't.

Second, it's stress testing by other means. In 2026, we have all these fancy testing frameworks and CI/CD pipelines. But sometimes, the best way to find out if your system can handle weird edge cases is to... give it weird edge cases. If your API gateway can route DOOM game state while maintaining authentication and rate limiting, it can probably handle whatever your actual business logic throws at it.
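"Give it weird edge cases" can be as simple as throwing garbage at a boundary and checking that it's rejected cleanly rather than crashing. Here's a sketch with a hypothetical state-update parser; the validation rules are invented for illustration.

```python
import json

def parse_state_update(raw: str) -> dict:
    """Defensive parser for incoming game-state messages: accept only
    a JSON object with a non-negative integer 'tick'."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("not valid JSON")
    if not isinstance(payload, dict):
        raise ValueError("payload must be an object")
    tick = payload.get("tick")
    if not isinstance(tick, int) or isinstance(tick, bool) or tick < 0:
        raise ValueError("tick must be a non-negative integer")
    return payload

# The weird edge cases: none of these should crash the parser --
# they should all be rejected with a clean ValueError.
weird_inputs = ['', 'null', '[]', '{"tick": -1}', '{"tick": "soon"}',
                '{"tick": true}', '{not json}']
for raw in weird_inputs:
    try:
        parse_state_update(raw)
        raise AssertionError(f"accepted bad input: {raw!r}")
    except ValueError:
        pass  # rejected as expected

assert parse_state_update('{"tick": 0}')["tick"] == 0
```

A system that survives this kind of abuse at every boundary is a system you can trust with real traffic.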

Third, and this is the secret reason: it's fun. Programming has become so serious. So corporate. So focused on KPIs and ROI and sprint velocities. Making DOOM run on something impossible is pure, unadulterated technical joy. It's remembering why we got into this field in the first place.

Practical Lessons for Actual API Integration

Okay, so you're probably not going to actually port DOOM to your enterprise Java application. But the exercise teaches real lessons that apply to everyday API work:

Lesson 1: Every abstraction has a cost. Those 13 layers? Each one adds latency. Each one can fail. Each one needs to be understood, maintained, and debugged. Before you add another microservice or middleware layer, ask: is this solving a real problem, or just creating the illusion of architecture?
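You can see the cost of stacking layers with a few lines of Python. This sketch wraps a trivial handler in 13 pass-through layers; the layers here do nothing, which is exactly the point.

```python
import time

def base_handler(request: dict) -> dict:
    """The actual work: trivially echo the request."""
    return {"ok": True, "echo": request}

def add_layer(handler):
    """Wrap a handler in one more pass-through 'abstraction layer'."""
    def layer(request: dict) -> dict:
        # A real layer would log, authenticate, translate formats, etc.
        return handler(dict(request))  # even one copy per layer adds up
    return layer

# Stack 13 do-nothing layers, the way frameworks quietly do.
handler = base_handler
for _ in range(13):
    handler = add_layer(handler)

start = time.perf_counter()
result = handler({"tick": 1})
elapsed = time.perf_counter() - start
assert result["ok"] is True
# Each layer is cheap, but none of them is free -- and these
# particular layers don't even do anything.
```

Thirteen function calls and dict copies are microseconds here; thirteen network hops, serializations, and auth checks in a real stack are not.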

Lesson 2: Integration points are where systems break. DOOM doesn't crash in the middle of rendering a frame. It crashes when it tries to save a game, or load a sound, or talk to the network. Your systems are the same. The actual business logic is usually solid. It's the integration between components—the API calls, the database connections, the message queues—where things go wrong.
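Since the integration points are where things break, that's where defensive patterns like retries with backoff earn their keep. Here's a minimal sketch; `save_game` is a stand-in for any flaky external call.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Retry a flaky integration call with simple exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

# A stand-in for a flaky save-game endpoint: fails twice, then succeeds.
calls = {"n": 0}
def save_game():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("save backend unavailable")
    return "saved"

assert with_retries(save_game) == "saved"
assert calls["n"] == 3  # succeeded on the third attempt
```

Retries paper over transient failures, not design flaws, so they belong at the integration boundary, not sprinkled through the business logic.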

Lesson 3: Sometimes the hard way is the right way. That Reddit poster was frustrated by inefficient abstractions. But here's the thing: sometimes those abstractions exist for good reasons. Maybe that XML serialization is there because you need to integrate with a legacy system that only speaks SOAP. Maybe those 13 layers exist because different teams own different parts of the system, and this is the cleanest interface between them.

If you're dealing with complex web scraping or automation tasks as part of your integration work—say, pulling game assets or documentation—tools like Apify can handle the infrastructure headaches. But the architectural decisions? Those are still yours to make.

The 2026 Landscape: What's Changed and What Hasn't

It's been about three years since that Reddit post, and some things have changed in API integration:

AI-assisted development is everywhere, but it hasn't solved the fundamental problems. ChatGPT might generate the code to interface DOOM with your Kafka cluster, but it won't tell you if that's a good idea. It won't understand your company's specific constraints. And it definitely won't debug the race conditions that emerge when 10,000 demons are trying to pathfind through your message queue.


Serverless has matured, but the abstraction costs are still real. Lambda functions are great until you realize your DOOM port is spending 90% of its time in cold starts. And good luck debugging distributed tracing across 50 different function invocations.

Edge computing has made things faster, but more complex. Now your DOOM port might be running across 20 different edge locations, with game state synchronized in real-time. The latency is better, but the consistency problems are... creative.

Through all this, the core challenge remains: how do you build systems that are both flexible enough to handle unexpected requirements (like, say, running a 30-year-old game) and maintainable enough that you don't need three months to add a simple feature?

Common Mistakes (Or: How to Waste 3 Months Efficiently)

If you're going to attempt something this ridiculous—whether it's porting DOOM or just integrating two systems that really shouldn't talk to each other—here's how to avoid the worst pitfalls:

Mistake 1: Underestimating state management. DOOM has game state. Your enterprise application has user sessions, database transactions, and cache consistency. When you're building integration layers, you need to understand how state flows through the system. Where is it stored? Who owns it? What happens when it needs to change?
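One answer to "who owns the state?" is to make ownership explicit in code. This sketch shows a versioned store with optimistic concurrency; the `StateStore` design is an illustration of the idea, not a prescription.

```python
class StateStore:
    """One explicit owner for a piece of state, with optimistic
    concurrency: every write must name the version it read."""

    def __init__(self, initial: dict):
        self._state = dict(initial)
        self._version = 0

    def read(self) -> tuple[int, dict]:
        """Return the current version alongside a copy of the state."""
        return self._version, dict(self._state)

    def write(self, expected_version: int, new_state: dict) -> bool:
        """Reject writes based on a stale read instead of silently
        clobbering someone else's update."""
        if expected_version != self._version:
            return False
        self._state = dict(new_state)
        self._version += 1
        return True

store = StateStore({"player_health": 100})
version, _ = store.read()
assert store.write(version, {"player_health": 87}) is True
assert store.write(version, {"player_health": 12}) is False  # stale version
```

When state flows through 13 layers, the layer that owns it should be the only one allowed to change it; everyone else reads a version and proposes updates.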

Mistake 2: Assuming the documentation is accurate. It's not. It's never accurate. The Reddit post mentioned something being "offhandedly mentioned by a stranger in a reddit thread." Sometimes that's your best source of truth. The actual behavior of the system is what matters, not what the documentation says it should do.

Mistake 3: Not knowing when to stop. This is the big one. You can always add another abstraction layer. You can always make the system more "flexible." But at some point, you need to ship. The perfect is the enemy of the good, and in integration work, the over-architected is the enemy of the working.

If you find yourself in over your head, sometimes the smartest move is to bring in an expert who's done this kind of integration before. Three months of your time might be more expensive than you think.

Conclusion: The Value in Seemingly Worthless Work

So, can it run DOOM? More importantly, should you spend three months making it run DOOM?

The Reddit poster had it right when they asked, "Why do we go to the moon?" We do things not because they're easy, but because they're hard. Not because they're practical, but because they expand what we believe is possible.

In 2026, with all our advanced tools and frameworks and AI assistants, we still need these exercises. We need to remember that beneath all the abstraction layers and API specifications and architecture diagrams, we're just making computers do things. Sometimes those things are processing insurance claims. Sometimes they're rendering demons from Hell.

The next time you're faced with an integration challenge that seems absurd—whether it's making DOOM run on your Java EE application or connecting two systems that have no business talking to each other—remember: the value isn't just in whether it works. It's in what you learn along the way. It's in understanding your systems at a deeper level. It's in remembering that programming, at its heart, is still about making impossible things possible.

Even if those impossible things involve far too many abstraction layers and three months of your life.

James Miller

Cybersecurity researcher covering VPNs, proxies, and online privacy.