Waymo's Blackout Breakdown: What SF's Traffic Jam Reveals About AVs

Emma Wilson

December 23, 2025

12 min read

When a blackout hit San Francisco, Waymo's driverless cars didn't just stop—they created gridlock. This incident exposes critical vulnerabilities in autonomous vehicle technology and raises urgent questions about real-world readiness.

The Night San Francisco's Future Stood Still

Picture this: It's a Tuesday evening in San Francisco, 2025. The lights flicker, then die. A localized blackout hits several downtown blocks. For human drivers, it's inconvenient but manageable: you slow down, you're extra cautious, maybe you pull over. But for Waymo's fleet of driverless cars? A complete system failure, one that doesn't just stop the vehicles but creates multiple traffic jams that take hours to untangle.

This wasn't a hypothetical scenario. It actually happened. And the fallout was immediate: Waymo voluntarily suspended its San Francisco service while investigating what went wrong. But here's what really matters: This incident isn't just about one company's bad night. It's a stark revelation about the current state of autonomous vehicle technology and its readiness for the messy, unpredictable reality of our cities.

In this deep dive, we'll unpack exactly what happened, why it matters more than you might think, and what it reveals about the gap between laboratory-perfect AI and real-world chaos. If you're wondering whether self-driving cars are truly ready for prime time, this incident provides some uncomfortable answers.

What Actually Happened That Night?

Let's reconstruct the sequence based on reports and the community discussion. Around 7:30 PM, a power substation failure plunged several blocks near the Financial District into darkness. Traffic signals went dark. Streetlights died. For Waymo's vehicles, this triggered what multiple Reddit commenters described as a "fail-safe cascade."

The cars didn't just pull over safely to the curb. According to eyewitness accounts, they entered what one user called "zombie mode"—stopping in travel lanes, blocking intersections, and refusing to move even when manually directed by first responders. One particularly vivid description mentioned a Waymo vehicle "parked diagonally across two lanes, hazards flashing, completely immobile while traffic backed up for blocks."

What's fascinating—and concerning—is how the community immediately identified the likely failure points. Multiple commenters with technical backgrounds pointed out that Waymo's vehicles likely rely heavily on pre-mapped data and constant connectivity. When the blackout disrupted cellular networks and made the environment unrecognizable compared to their high-definition maps, the vehicles essentially lost their "mental map" of the world.

"It's like they're brilliant students who've memorized the textbook perfectly," one Redditor analogized, "but when the professor asks an unexpected question, they just freeze up." This gets to the heart of the issue: These systems are optimized for normal conditions, not edge cases. And as we'll see, blackouts aren't even that rare of an edge case in many cities.

The Core Problem: Over-Reliance on Digital Perfection

Here's where things get technically interesting. From what the community dissection revealed, Waymo's approach—and indeed, most AV systems—depends on what we might call "digital environmental certainty." They need clean sensor data, reliable connectivity, and conditions that match their training data. A blackout violates all three.

First, the sensor issue. Most AVs use LiDAR, cameras, and radar. In complete darkness, cameras become nearly useless. LiDAR still works, but it's interpreting a world without expected visual cues—no traffic light colors, no illuminated signs, no brake lights from other cars. The system is suddenly working with partial, unfamiliar data.
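
To make that concrete, here's a toy sketch of confidence-weighted sensor fusion. To be clear, this is not Waymo's stack; the sensor names, numbers, and threshold are invented purely to show how a fused estimate can lean on whichever sensors remain trustworthy when the cameras go dark.

```python
# Toy confidence-weighted sensor fusion. All values are invented
# for illustration; this is not any vendor's real perception code.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    distance_m: float   # estimated distance to the obstacle ahead
    confidence: float   # 0.0 (useless) to 1.0 (fully trusted)

def fuse(readings: list[SensorReading]) -> float | None:
    """Confidence-weighted average; None if nothing is usable."""
    usable = [r for r in readings if r.confidence > 0.1]
    if not usable:
        return None
    total = sum(r.confidence for r in usable)
    return sum(r.distance_m * r.confidence for r in usable) / total

# During a blackout, camera confidence collapses, but LiDAR and radar
# keep contributing, so the estimate degrades instead of disappearing.
readings = [
    SensorReading("camera", 18.0, 0.05),  # near-useless in darkness
    SensorReading("lidar", 21.5, 0.90),
    SensorReading("radar", 22.0, 0.70),
]
print(fuse(readings))  # ~21.7 m, driven by lidar and radar
```

The interesting failure mode isn't the math; it's what the planner does when the fused estimate is built from a shrinking set of sensors.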

Second, connectivity. While the vehicles don't need a constant cloud connection to operate, they do rely on it for updates, coordination, and certain fallback protocols. When cellular networks got overloaded during the blackout (as they often do during emergencies), that safety net disappeared.

But the third issue is the most fundamental: These systems are trained on millions of miles of data, but almost certainly not enough miles of "complete infrastructure failure" scenarios. As one commenter put it: "They've trained for rain, snow, and fog. But have they trained for 'everything electronic stops working simultaneously'? Probably not."

This exposes what I consider the central tension in AV development: The cleaner and more controlled the testing environment, the less prepared the system is for real-world messiness. And cities are nothing if not messy.

Why This Matters More Than Other AV Failures

You might be thinking: "Okay, so some cars stopped. Big deal." But the community reaction highlighted why this incident is particularly significant. It's not just about the immediate inconvenience—it's about systemic vulnerability.

Several commenters made the crucial point that blackouts often coincide with emergencies. What if this had happened during an earthquake evacuation? Or a fire? Or a medical emergency where streets need to stay clear for first responders? The vehicles didn't just fail privately—they failed in a way that actively impeded human response.

One firefighter who commented put it bluntly: "We train for blackouts. Our trucks have procedures. If AVs become common and they all freeze during outages, they become movable roadblocks during exactly the situations when we need clear roads most."

This gets to what I call the "public infrastructure responsibility" question. When human drivers make errors, they're distributed and individual. When an AV system fails, it fails identically across an entire fleet. That creates systemic risk rather than individual risk. During the blackout, it wasn't one car stopping oddly—it was potentially dozens following the same flawed protocol.

There's also the trust dimension. Many commenters noted that they'd been cautiously optimistic about AVs until this incident. "I could accept occasional fender-benders," one wrote. "But gridlocking a city during an emergency? That shows a fundamental lack of robustness."

The Human Factor: What Drivers Do That AVs Don't

Reading through the experiences shared by human drivers during the same blackout was illuminating. While the AVs froze, humans adapted using what one commenter perfectly described as "social driving intelligence."

Humans treated the dark intersections as four-way stops—a standard procedure taught in driver's ed. They made eye contact with other drivers. They used hand signals. They proceeded cautiously based on situational awareness rather than rigid rules. Most importantly, they kept traffic flowing, however slowly.

The AVs, by contrast, appeared to lack any protocol for "degraded but functional" operation. They seemed to have two modes: normal operation and complete shutdown. What was missing was the human ability to operate in between—to drive more carefully, more slowly, with less certainty, but to keep moving.

This highlights what I believe is the next frontier in AV development: graceful degradation. Can these systems be programmed not just for optimal conditions, but for suboptimal ones? Can they recognize when they're in a "reduced capability" mode and adjust their behavior accordingly?

One software engineer in the discussion made an apt comparison to internet protocols: "TCP handles packet loss by slowing down, not stopping entirely. AVs need similar degradation protocols—when sensor input degrades, slow down but keep moving safely."
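
That analogy maps onto code surprisingly directly. Here's a minimal sketch of a degradation policy that turns perception confidence into a target speed, slowing down rather than stopping outright. The thresholds and scaling are invented for illustration; a real policy would be far more nuanced and safety-certified.

```python
# A toy "graceful degradation" speed policy: lower confidence means
# lower speed, with a hard floor where stopping is the safer choice.
# All thresholds here are invented for illustration.

def target_speed_mph(confidence: float, normal_speed: float = 25.0) -> float:
    """Map overall perception confidence in [0, 1] to a target speed."""
    if confidence >= 0.8:
        return normal_speed                       # normal operation
    if confidence >= 0.4:
        # Degraded mode: scale down smoothly but keep moving.
        return normal_speed * (confidence - 0.2) / 0.6
    return 0.0                                    # stopping is safer

for c in (0.95, 0.60, 0.45, 0.20):
    print(f"confidence={c:.2f} -> {target_speed_mph(c):.1f} mph")
```

The exact numbers don't matter. What matters is that there's a continuum between "normal operation" and "complete shutdown," which is exactly what seemed to be missing during the blackout.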

Technical Solutions and Workarounds

So what could Waymo—and other AV companies—do differently? Based on the technical discussion and my own analysis, here are several approaches that could prevent a repeat.

First, better offline capabilities. The vehicles need more robust onboard decision-making that doesn't depend on cloud connectivity. This means more sophisticated local processing and fallback maps. Interestingly, some commenters suggested using Raspberry Pi 5 or similar edge computing devices as test platforms for developing these offline protocols—though obviously production vehicles would need industrial-grade hardware.
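
To make the architecture tangible, here's a hedged sketch of an offline-first planner: it prefers live map data, and when connectivity drops it falls back to a cached onboard map and flags a degraded mode. Every name in it (`fetch_live_map`, `CachedMap`, and so on) is hypothetical; this illustrates the pattern, not Waymo's code.

```python
# Offline-first fallback pattern: prefer the live map, but degrade to
# a cached onboard map instead of halting when the network is gone.
import time

class CachedMap:
    def __init__(self):
        self.last_synced = time.time()
    def route(self, start, goal):
        return [start, goal]  # placeholder path from the onboard map

def fetch_live_map(timeout_s: float = 2.0):
    raise TimeoutError("cell network overloaded")  # simulate the blackout

def plan_route(start, goal, cache: CachedMap):
    try:
        live = fetch_live_map()
        return live.route(start, goal), "normal"
    except TimeoutError:
        # Connectivity lost: use the cached map and flag degraded mode
        # so the control layer lowers speed and avoids complex routes.
        return cache.route(start, goal), "degraded-offline"

path, mode = plan_route("4th & Market", "Embarcadero", CachedMap())
print(mode, path)
```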

Second, "dark mode" training. AV systems need to be explicitly trained for infrastructure failure scenarios. This doesn't just mean driving in the dark—it means driving when traffic signals are dead, when streetlights are out, when other vehicles are behaving unpredictably. This training data is hard to collect naturally, which is where simulation becomes crucial.

Third, human-in-the-loop fallbacks. Several commenters reasonably asked: "Why couldn't remote operators take control?" The answer, apparently, was scale and connectivity. With many vehicles affected simultaneously and cellular networks overloaded, remote assistance wasn't feasible. Better prioritization protocols—maybe vehicles in critical locations get remote assistance first—could help.
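
As a thought experiment, here's what such a prioritization protocol might look like as a simple triage queue. The scoring weights are made up; the idea is just that vehicles blocking emergency routes get scarce operator attention first.

```python
# Toy triage queue for remote assistance: highest-impact vehicles
# get the limited operator bandwidth first. Weights are invented.
import heapq

def priority(vehicle: dict) -> float:
    score = 0.0
    if vehicle["blocking_intersection"]:
        score += 10
    if vehicle["on_emergency_route"]:
        score += 20
    score += vehicle["passengers"] * 2
    return -score  # heapq is a min-heap; negate for highest-first

stranded = [
    {"id": "W1", "blocking_intersection": True,  "on_emergency_route": False, "passengers": 0},
    {"id": "W2", "blocking_intersection": True,  "on_emergency_route": True,  "passengers": 1},
    {"id": "W3", "blocking_intersection": False, "on_emergency_route": False, "passengers": 2},
]
queue = [(priority(v), v["id"]) for v in stranded]
heapq.heapify(queue)
while queue:
    _, vid = heapq.heappop(queue)
    print("assist next:", vid)  # W2, then W1, then W3
```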

Fourth, and this is my own addition: better vehicle-to-vehicle communication. If one AV identifies a blackout condition, it could share that information with nearby AVs, allowing the fleet to coordinate a response rather than each vehicle failing individually.
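
Here's a rough sketch of that idea, using an entirely hypothetical message format: the first vehicle to infer a blackout broadcasts its evidence so nearby fleet members can enter a degraded mode together instead of each discovering the failure alone.

```python
# Hypothetical V2V blackout alert. The message format and handling
# are invented to illustrate the coordination idea, nothing more.
import json
import time

def blackout_alert(vehicle_id: str, lat: float, lon: float) -> str:
    return json.dumps({
        "type": "INFRA_FAILURE",
        "source": vehicle_id,
        "lat": lat,
        "lon": lon,
        "observed_at": time.time(),
        "evidence": ["signal_unpowered", "streetlights_off"],
    })

def on_alert(raw: str, my_lat: float, my_lon: float):
    msg = json.loads(raw)
    close_by = abs(msg["lat"] - my_lat) + abs(msg["lon"] - my_lon) < 0.02
    # A real system would authenticate the sender and cross-check the
    # evidence before trusting a single broadcast.
    if close_by:
        print(f"{msg['source']} reports a blackout nearby; entering degraded mode")

on_alert(blackout_alert("W42", 37.7946, -122.3999), 37.7950, -122.4000)
```

One obvious caveat: a single unverified broadcast shouldn't flip an entire fleet's behavior, so any real protocol would need authentication and corroboration from multiple reporters.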

What This Means for AV Adoption in 2025 and Beyond

The community reaction to this incident was more nuanced than simple rejection. Many commenters still believe in the long-term potential of AVs, but this event served as a reality check about the timeline and implementation.

One theme that emerged strongly: We may need to rethink where AVs operate first. Dense urban centers with aging infrastructure (like parts of San Francisco) might be the hardest environments, not the easiest. Suburban areas or dedicated AV lanes with maintained infrastructure could be better starting points.

Another insight: Regulation will likely tighten. Several commenters predicted—and I agree—that incidents like this will lead to new requirements for AV emergency protocols. We might see mandates for minimum offline operation capabilities, or required testing in simulated infrastructure failure scenarios.

There's also the business model consideration. Waymo's voluntary suspension shows responsible caution, but it also highlights the fragility of these services. If a localized blackout can take down an entire metro area's service, that's a reliability issue that affects commercial viability. Companies might need to invest in more redundant systems than originally planned.

Personally, I think this incident accelerates a necessary shift in AV development philosophy. The focus has been on handling the 99% of normal conditions perfectly. Now, there's growing recognition that handling the 1% of abnormal conditions adequately is equally important—maybe more so, because that 1% often coincides with high-stakes situations.

Common Questions and Misconceptions

Let's address some of the recurring questions from the discussion that we haven't covered yet.

"Why don't they just program them to treat dark intersections as four-way stops?" It's not that simple. Identifying that an intersection is dark (not just that the traffic light is red) requires understanding context. Is it nighttime? Is there a power outage? Are other lights in the area working? This contextual awareness is challenging for current systems.

"Couldn't they just include a manual override?" Most AVs do have some manual controls, but they're designed for maintenance, not emergency driving. And putting a steering wheel in every vehicle partly defeats the purpose of full autonomy. The better solution is making the autonomy more robust.

"Is this a problem with all AVs or just Waymo?" While this specific incident involved Waymo, the underlying challenges affect all AV companies. Different companies might fail differently—some might try to keep moving and cause accidents rather than stopping—but the fundamental issue of operating in complete infrastructure failure affects everyone.

"What about Tesla's vision-only approach?" Interestingly, some commenters speculated that camera-only systems might actually handle this slightly better, since they're already designed to work without LiDAR. But they'd still struggle with the complete darkness. Really, any single-sensor approach has vulnerabilities—which is why most experts advocate for sensor fusion.

Moving Forward: Lessons and Next Steps

So where do we go from here? The blackout incident, while unfortunate, provides valuable learning opportunities if we're willing to pay attention.

For AV companies, the path forward involves more diverse testing. They need to actively seek out edge cases rather than waiting to encounter them accidentally. This might mean creating simulated blackouts, or testing in controlled environments with disabled infrastructure. It also means developing more sophisticated degradation protocols—ways for the vehicles to recognize when they're in suboptimal conditions and adjust their behavior accordingly.

For cities and regulators, there's work to do too. Infrastructure planning needs to consider AV requirements. Maybe critical intersections need backup power for traffic signals. Maybe we need standardized protocols for how AVs should behave during emergencies. This incident shows that AVs aren't just another vehicle—they're a new type of infrastructure that needs to be integrated thoughtfully.

For consumers and communities, the lesson is about managing expectations. Full autonomy in all conditions is further away than the hype sometimes suggests. But incremental progress is still progress. Each failure like this, if properly analyzed and addressed, makes the systems better.

What fascinates me most about this incident is how it reveals the gap between technical capability and practical robustness. The AVs didn't fail because they lack sophisticated technology; they failed because that technology wasn't prepared for a very real, if infrequent, scenario. Closing that gap is the next great challenge in autonomous vehicles.

As one particularly insightful Reddit comment put it: "The measure of a system isn't how it performs on a sunny day with perfect visibility. It's how it performs at night, in the rain, when the power's out, and something unexpected happens." By that measure, we still have work to do. But acknowledging that is the first step toward building systems that can truly handle our complex world.

Emma Wilson

Digital privacy advocate and reviewer of security tools.