
SSD Endurance Test: 3.58PB on 256GB Drive & What It Means

Lisa Anderson


February 26, 2026

13 min read

When a 256GB Samsung PM981 SSD survived 3.58 petabytes of writes—23 times its rated TBW—it challenged everything we thought about NAND endurance. We break down what this extreme case teaches us about real-world SSD reliability, monitoring, and choosing drives for heavy workloads in 2026.


The SSD That Refused to Die: 3.58 Petabytes on a 256GB Drive

You know that feeling when you check your car's odometer and see it's rolled past 300,000 miles? Now imagine that, but for your SSD. That's exactly what happened when a Reddit user discovered their humble 256GB Samsung PM981—the OEM version of the popular 970 EVO—had written a staggering 3.58 petabytes of data. The drive was officially rated for just 150 terabytes written (TBW). It had exceeded its warranty by 2,286%. And yet, according to the post, it was still chugging along, running an Arma 3 server with aggressive logging and constant swapping.

This isn't just a fun tech anecdote—it's a case study that challenges everything we thought we knew about SSD endurance. In 2026, with SSDs dominating everything from laptops to data centers, understanding what really happens when you push storage to its limits matters more than ever. We're going to break down what this extreme example teaches us, what those "uncorrectable error" counts actually mean, and how you should think about SSD reliability for your own projects.

Oh, and we'll answer the burning question everyone in that Reddit thread was asking: Should you try this with your own drives? (Spoiler: Probably not.)

Understanding the Numbers: From TBW to Petabytes

Let's start with the basics, because the numbers here are genuinely mind-boggling. The Samsung PM981 256GB drive has a manufacturer-rated endurance of 150 TBW. That's Terabytes Written—a standard metric that represents how much data you can write to the drive before the NAND flash cells start wearing out. In theory, once you hit that number, the drive could fail at any moment.

Our heroic drive wrote 3.58 petabytes. One petabyte is 1,000 terabytes. So we're talking about 3,580 terabytes written. Do the math: 3,580 ÷ 150 = 23.87. This drive wrote nearly 24 times its rated endurance.

To put that in perspective: If you wrote 50GB to this drive every single day (which is a lot for most users), it would take you about 71,600 days to reach 3.58PB. That's nearly 200 years of heavy daily use. The Arma 3 server managed it in a tiny fraction of that time through constant, heavy write operations.
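If you want to sanity-check those figures yourself, the arithmetic fits in a few lines of Python (decimal units throughout, 1PB = 1,000TB):

```python
# Sanity-check the endurance figures from the post (decimal units: 1 PB = 1,000 TB).
RATED_TBW = 150    # terabytes written, manufacturer rating
ACTUAL_TB = 3_580  # 3.58 PB actually written

ratio = ACTUAL_TB / RATED_TBW
print(f"Wrote {ratio:.1f}x the rated endurance")  # ~23.9x

# How long would a heavy desktop user (50 GB/day) take to write 3.58 PB?
daily_gb = 50
days = ACTUAL_TB * 1_000 / daily_gb
print(f"{days:,.0f} days, or about {days / 365.25:.0f} years")  # 71,600 days, ~196 years
```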

The post mentions the drive was at "170% usage". This refers to the Percentage Used attribute in NVMe SMART data, which is allowed to climb past 100 (the spec caps the field at 255), so 170% simply means the drive estimates it has consumed 1.7 times its rated endurance. And those "more errors than stars in the universe"? That's probably the "Media and Data Integrity Errors" count in the NVMe health log. High numbers here don't necessarily mean you've lost data: the controller remaps failing blocks transparently, so the count reflects NAND wear rather than visible corruption.

Why Server Workloads Are SSD Killers (Usually)

The original post gives us crucial context: This drive was running an Arma 3 server with "aggressive logging and constant OS swapping." This is important because it explains how the drive accumulated such insane write volumes.

Game servers—especially modded ones like Antistasi Ultimate—generate logs constantly. Every player action, every server event, every error gets written to disk. When you combine that with the memory swapping mentioned (the server had 16GB RAM but was likely using swap space), you get a perfect storm of write operations. The drive wasn't just storing data—it was being used as an extension of system memory and a real-time recording device.

Most consumer SSDs aren't designed for this kind of workload. They're optimized for the "bursty" pattern of typical PC use: boot up, load some applications, save some files, then idle. Server workloads are different—they're often sustained, heavy writes that never let up. That's why enterprise SSDs exist, with much higher endurance ratings and features like power-loss protection.

What's fascinating here is that a consumer-grade (well, OEM) drive survived this abuse. It suggests that Samsung's V-NAND technology, even in late-2010s drives like the PM981, had more headroom than the conservative ratings implied.

SMART Data: Reading Between the Error Lines


When the post mentions "more errors than there are stars in the universe," it's probably referring to the NVMe health log's "Media and Data Integrity Errors" field, the NVMe counterpart of ATA SMART attribute 187 (Reported Uncorrectable Errors). Here's what you need to know about interpreting these numbers in 2026.

First, not all SMART errors are created equal. Modern SSDs have sophisticated error correction built in. When the drive reads data and detects an error, it first tries to correct it using ECC. If it succeeds, it might still increment an error counter—but your data remains intact. These are "corrected errors," and while they indicate NAND wear, they don't mean data loss.

"Uncorrectable errors" are more serious—these are errors the drive couldn't fix. But even here, context matters. Some controllers will remap bad blocks transparently, so you might see high error counts without experiencing actual problems.

The key is to monitor trends, not just absolute numbers. If your error count was stable for years and suddenly starts climbing rapidly, that's a warning sign. If it's been high but stable (like our 3.58PB champion), the drive might have plenty of life left.

Tools like CrystalDiskInfo, smartctl (for Linux), or Samsung's own Magician software can give you the raw numbers. But understanding what they mean requires knowing your specific drive's behavior patterns.

The Real-World vs. Manufacturer Ratings Gap

This case highlights something storage enthusiasts have suspected for years: Manufacturer TBW ratings are conservative. Very conservative. They're warranty limits, not hard failure points.


Think of it like a car's speedometer that goes to 160 mph. The manufacturer might say "don't exceed 100 mph," but the car can go faster. They're limiting their liability, not stating a physical limit. Similarly, SSD manufacturers test their NAND to determine a safe warranty threshold, then add a margin of safety on top of that.

Several factors influence how much you can actually write:

  • NAND type: TLC (Triple-Level Cell) drives like the PM981 have lower endurance than MLC (Multi-Level Cell) but higher than QLC (Quad-Level Cell). Newer 2026 drives using 3D NAND with more layers often have better endurance than their specs suggest.
  • Over-provisioning: Drives reserve extra space not visible to the user for wear leveling and bad block replacement. More over-provisioning means longer life.
  • Controller efficiency: A smart controller can distribute writes more evenly, reducing wear on any single NAND cell.
  • Workload pattern: Sequential writes are easier on NAND than random writes. Our Arma server was probably doing both.
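The over-provisioning point is easy to quantify. Even before a vendor sets aside extra NAND, a drive built from 256 GiB of flash but sold as 256 GB gets roughly 7.4% of hidden headroom from the binary-versus-decimal unit gap alone:

```python
# Inherent over-provisioning from the GiB-vs-GB gap (a common baseline;
# vendors often reserve additional NAND on top of this).
raw_bytes  = 256 * 2**30   # 256 GiB of physical NAND
user_bytes = 256 * 10**9   # 256 GB exposed to the OS

op_percent = (raw_bytes - user_bytes) / user_bytes * 100
print(f"Inherent over-provisioning: {op_percent:.1f}%")  # ~7.4%
```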

The takeaway? Don't panic if you exceed your drive's TBW rating. But also don't assume every drive will achieve 24x its rating—this case is exceptional.

Monitoring Your Own Drives: Practical Tools for 2026

So you're running a server, or maybe just a demanding application, and you want to keep tabs on your SSD's health. What should you actually do in 2026?

First, get familiar with SMART monitoring tools. On Windows, CrystalDiskInfo remains excellent and free. On Linux, smartctl (part of smartmontools) gives you command-line access to all the raw data. For Samsung drives specifically, their Magician software provides a more user-friendly interface and can sometimes give you insights other tools miss.

Key attributes to watch:

  • Percentage Used: Reaches 100 at the rated endurance, and the NVMe spec lets it keep climbing past 100 (capped at 255), so some drives report well beyond it.
  • Media and Data Integrity Errors: This is the critical one—actual uncorrectable errors.
  • Available Spare: How much reserve capacity the drive has left for bad block replacement.
  • Temperature: Heat accelerates NAND wear. Keep your drives cool.
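Here's a sketch of pulling those four fields programmatically. The field names follow the `nvme_smart_health_information_log` section of `smartctl --json` output from smartmontools; the sample below is canned (with values loosely matching the Reddit drive) rather than read from a live device:

```python
import json

# Sample shaped like `smartctl -j -a /dev/nvme0` output (smartmontools);
# values are illustrative, not from a real device.
sample = '''{
  "nvme_smart_health_information_log": {
    "percentage_used": 170,
    "media_errors": 12345,
    "available_spare": 97,
    "temperature": 41,
    "data_units_written": 6992187500
  }
}'''

log = json.loads(sample)["nvme_smart_health_information_log"]

# One NVMe "data unit" is 1,000 sectors of 512 bytes = 512,000 bytes.
host_writes_tb = log["data_units_written"] * 512_000 / 10**12

print(f"Percentage used : {log['percentage_used']}%")
print(f"Media errors    : {log['media_errors']}")
print(f"Available spare : {log['available_spare']}%")
print(f"Temperature     : {log['temperature']} C")
print(f"Host writes     : {host_writes_tb:,.0f} TB")  # 3,580 TB = 3.58 PB
```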

Set up alerts. Most monitoring tools can notify you when certain thresholds are crossed. I'd recommend setting an alert at 80% of TBW for critical drives, and definitely at 100%.

But here's a pro tip: Also monitor your write rate. Tools like CrystalDiskInfo show total host writes. Check it monthly. If you're writing 1TB per month to a 500TBW drive, you know you've got about 41 years of life. If you're writing 10TB per month, that's 4 years. Simple math gives you much better planning than waiting for SMART warnings.
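That back-of-envelope planning is worth automating. A minimal helper (the function name and signature are mine, not from any monitoring tool):

```python
def months_of_life_left(tbw_rating, total_host_writes_tb, tb_per_month):
    """Rough remaining lifetime from the rated endurance and observed write rate."""
    remaining_tb = tbw_rating - total_host_writes_tb
    return max(remaining_tb, 0) / tb_per_month

# The article's two scenarios on a fresh 500 TBW drive:
print(months_of_life_left(500, 0, 1) / 12)   # ~41.7 years at 1 TB/month
print(months_of_life_left(500, 0, 10) / 12)  # ~4.2 years at 10 TB/month
```

Feed it your drive's actual host-writes counter instead of 0 and rerun monthly; the trend matters more than any single reading.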

When Should You Actually Replace a Drive?


This is the million-dollar question. Our 3.58PB drive was still working despite astronomical error counts. So when do you pull the plug?

Here's my practical hierarchy for replacement decisions:

Immediate replacement (back up and replace NOW):

  • Any uncorrectable errors that cause actual data corruption or read failures
  • Reallocated sector count climbing rapidly (dozens per day)
  • Drive disappears from BIOS intermittently
  • Extreme performance degradation (we're talking minutes to open a file)

Plan replacement soon (order a new drive):

  • Exceeded TBW by more than 50% in a critical system
  • Available spare below 10%
  • Error counts increasing steadily week over week
  • You're relying on this drive for important data without recent backups

Monitor closely but don't panic:

  • High but stable error counts (like our example drive)
  • Exceeded TBW but performance is normal
  • Minor correctable errors that aren't increasing
  • Non-critical data with good backups

The Arma server drive probably fell into the last category for its owner. It was a boot drive for a game server. The data was likely reproducible (reinstall the server) or backed up elsewhere. The risk of failure was acceptable given the drive's proven resilience.

For your main workstation with irreplaceable data? Different story entirely.

Choosing Drives for Heavy Write Workloads in 2026

If you're setting up a server, a logging system, or any application that writes constantly, what should you look for in 2026?

First, consider enterprise or datacenter SSDs. They cost more, but they're built for this. Look for drives with DWPD (Drive Writes Per Day) ratings. A 1 DWPD drive rated for 1TB means you can write 1TB every day for its warranty period. That's a much more useful metric for server planning than TBW alone.
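The two metrics convert directly: TBW equals DWPD times capacity times days of warranty. A quick sketch:

```python
def dwpd_to_tbw(dwpd, capacity_tb, warranty_years):
    """Convert a Drive-Writes-Per-Day rating to total Terabytes Written."""
    return dwpd * capacity_tb * 365 * warranty_years

# A 1 DWPD, 1 TB drive with a 5-year warranty:
print(dwpd_to_tbw(1, 1, 5))     # 1825 TBW

# For comparison, the PM981's 150 TBW spread over a 5-year window:
print(150 / (0.256 * 365 * 5))  # ~0.32 DWPD
```

Seen this way, the consumer drive's rating is roughly a third of a full drive write per day, an order of magnitude below typical write-intensive enterprise parts.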


Second, look at the warranty. A 5-year warranty with unlimited writes (or very high TBW) tells you the manufacturer has confidence in their product. Samsung's PRO series, WD's Red SSDs for NAS, and Solidigm's (formerly Intel's) datacenter drives have historically been good choices.

Third, consider the technology. In 2026, we're seeing PLC (Penta-Level Cell) drives entering the market with even higher densities but lower endurance. For write-heavy workloads, you might want to stick with TLC or even MLC if you can find it. 3D NAND with more layers generally offers better endurance than planar NAND.

And here's something counterintuitive: Sometimes a larger drive than you need is better. A 1TB drive with 600 TBW rating might last longer than a 256GB drive with 150 TBW, even if you're only storing 200GB of data. The larger drive has more NAND cells to spread writes across.
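Normalizing TBW by capacity makes this concrete. Both drives below tolerate a similar number of full overwrites per cell, but for a fixed monthly workload the bigger drive's rated life is four times longer:

```python
def full_drive_writes(tbw, capacity_tb):
    """How many times the endurance rating lets you overwrite the entire drive."""
    return tbw / capacity_tb

small = full_drive_writes(150, 0.256)  # 256 GB drive rated 150 TBW
large = full_drive_writes(600, 1.0)    # 1 TB drive rated 600 TBW
print(f"{small:.0f} vs {large:.0f} full-drive writes")  # 586 vs 600

# But for a fixed workload, rated life scales with TBW, not capacity:
tb_per_month = 1
print(150 / tb_per_month, "months of rated life for the 256 GB drive")
print(600 / tb_per_month, "months for the 1 TB drive")
```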

For those managing multiple servers or data collection systems, fleet-wide health checks belong in dedicated monitoring: smartd, the daemon included with smartmontools, can poll SMART data on a schedule, run periodic self-tests, and send alerts when attributes cross thresholds, and most monitoring stacks can ingest its output.

Backup Strategies When Pushing Limits

If you're going to run drives hard—whether it's a game server, a data scraping operation, or a media editing workstation—your backup strategy needs to match your risk tolerance.

The 3-2-1 rule still applies in 2026: Three copies of your data, on two different media, with one offsite. But for write-heavy applications, consider these additions:

Frequent incremental backups: If your data changes constantly, daily or even hourly backups might be necessary. Tools like Veeam, Duplicati, or even simple rsync scripts can automate this.

Separate media for logs: If logging is causing most of your writes (like the Arma server), consider writing logs to a different drive. A cheap SATA SSD or even a hard drive can handle log writes without wearing out your main drive.

RAM disks for temp files: For applications that write lots of temporary data, a RAM disk (using system memory as a temporary drive) can eliminate millions of write operations. Just remember it vanishes on reboot.

Cloud sync as pseudo-backup: Services that sync to cloud storage (like Dropbox, Google Drive, or Backblaze) can serve as both accessibility tools and backup, though they're not a complete replacement for proper backups.

And here's the reality check: If you can't afford to lose the data, you can't afford to push your drives to their limits. The owner of that 3.58PB drive was probably comfortable reinstalling Arma and its mods. Your financial records or client data? Different story entirely.

What This Extreme Case Really Teaches Us

That Samsung PM981's journey to 3.58 petabytes written isn't just a cool story—it's a data point that changes how we think about storage reliability. Here's what I take away from it:

First, modern NAND flash is incredibly resilient. We've come a long way from the early SSD days when drives would fail suddenly and completely. Today's drives degrade gradually, giving you warning signs if you're paying attention.

Second, manufacturer ratings have healthy margins. That's good for consumers—it means our drives often outlast their warranties. But it's not a license to ignore those ratings entirely, especially for critical data.

Third, context matters enormously. A drive with high error counts in a non-critical system with good backups might be perfectly fine to keep using. The same error counts in your primary work computer should trigger immediate action.

Finally, monitoring is everything. That Reddit user knew their drive's status because they checked. Most people never look at SMART data until something goes wrong. Make it part of your regular maintenance routine—monthly for critical systems, quarterly for others.

In 2026, with storage cheaper than ever but data more valuable than ever, understanding the gap between specification sheets and real-world performance gives you an edge. You can make smarter buying decisions, set up more reliable systems, and avoid both unnecessary paranoia and costly failures.

That 256GB Samsung PM981 might be an "absolute unit," but it's also a reminder: Our tools are often more capable than we give them credit for. We just need to understand their actual limits, not just the ones printed on the box.

Lisa Anderson


Tech analyst specializing in productivity software and automation.