
I Stopped Rebuilding Indexes Every Sunday Night (Here's What Happened)

Evan Barke · DBA & Founder

Every SQL Server DBA I've ever met has an index maintenance job. It runs Sunday night, usually between midnight and 6am. It checks fragmentation levels, rebuilds anything over 30%, reorganizes anything between 10% and 30%, and maybe updates statistics along the way. Ola Hallengren's maintenance solution is probably the most common implementation, though plenty of shops have homegrown scripts that do essentially the same thing.
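The conventional job can be sketched roughly like this. This is an illustrative simplification, not a replacement for a real solution like Ola Hallengren's IndexOptimize, which also handles partitions, LOB columns, online options, and much more:

```sql
-- Rough sketch of the conventional Sunday-night logic (illustrative only).
-- Requires SQL Server 2017+ for STRING_AGG.
DECLARE @sql nvarchar(max);

SELECT @sql = STRING_AGG(CONVERT(nvarchar(max),
        N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
        + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + N'.'
        + QUOTENAME(OBJECT_NAME(ips.object_id))
        + CASE WHEN ips.avg_fragmentation_in_percent > 30
               THEN N' REBUILD;'       -- heavy fragmentation: full rebuild
               ELSE N' REORGANIZE;'    -- 10-30%: lighter, always-online reorganize
          END), NCHAR(10))
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent >= 10
  AND i.index_id > 0;  -- skip heaps

IF @sql IS NOT NULL
    EXEC sys.sp_executesql @sql;
```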

I had one too. It ran every Sunday at 1am on every production server. It had been running for years. Nobody questioned it. It was just part of the landscape, like backups and DBCC CHECKDB.

Then about two years ago, I started questioning it. And six months ago, I turned it off on a busy OLTP system. Here's what I learned.

The Conventional Wisdom

The reasoning behind regular index rebuilds goes like this: as data is inserted, updated, and deleted, index pages become fragmented. Fragmented indexes mean more IO because SQL Server has to read non-contiguous pages. More IO means slower queries. Therefore, periodically rebuild indexes to remove fragmentation, reduce IO, and improve performance.

This logic is sound. It's also based on assumptions that are increasingly outdated.

What Changed: SSDs

The fragmentation argument was developed in the era of spinning disk. On a mechanical hard drive, fragmentation matters a lot because the read head has to physically move to different locations on the platter. Sequential reads (contiguous pages) are dramatically faster than random reads (scattered pages). A fragmented index forces random IO, which on spinning disk is 100-200 times slower than sequential IO.

On an SSD, there is no read head. There is no seek time. Random reads are almost as fast as sequential reads. The throughput difference between reading 1,000 contiguous pages and 1,000 scattered pages on a modern NVMe drive is negligible. Some enterprise SSDs actually show no measurable difference at all.

If your SQL Server is running on SSDs (and in 2026, most production servers are), the primary justification for index rebuilds, reducing the IO cost of fragmentation, is largely irrelevant.

I say "largely" because there's a secondary effect: fragmented indexes can have lower page density, meaning more pages to store the same amount of data, meaning more memory consumed in the buffer pool. That's still a valid concern. But it's a different argument than "fragmentation causes slow reads."

The Cost Nobody Talks About

Index rebuilds are expensive operations. A rebuild drops and recreates the entire index structure. On a large table, this means:

  • Massive IO. Every page of the index is read and rewritten. A 50GB index means 50GB of reads and 50GB of writes. Multiply by however many indexes you're rebuilding.
  • Transaction log explosion. With FULL recovery model, every page write is logged. Rebuilding a 50GB index can generate 100GB+ of transaction log. Your log backups balloon. Your disk IO is saturated with log writes.
  • CPU usage. Sorting and reorganizing the data is CPU-intensive. During a rebuild window, your server's CPU may be pegged at 80-90%.
  • Blocking (offline rebuilds). Without Enterprise Edition's online rebuild capability, the table is locked during the rebuild. No reads, no writes. On Standard Edition, which is what most shops run, this means your maintenance window has to be long enough to rebuild every index without overlapping with application activity.
  • Plan cache churn. After a rebuild, statistics are refreshed with a full scan, which is generally good. But swapping in new statistics wholesale invalidates cached plans, so the rebuild can trigger plan recompilations across the board, and Monday morning's first executions pay the compilation cost.

That Sunday night rebuild job isn't free. It's consuming hours of IO, CPU, and log space every week. And if your indexes weren't fragmented enough to cause problems, you're burning all those resources for nothing.
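If you want to put a number on that cost, one cheap measurement is the log space a single rebuild consumes. A minimal sketch, assuming SQL Server 2012+ for sys.dm_db_log_space_usage; the table and index names are placeholders, and the delta is only meaningful if nothing else is writing to the log (or truncating it) while you measure:

```sql
-- Hedged sketch: quantify the log cost of one rebuild, ideally on a test copy.
-- dbo.BigTable / IX_BigTable_Col are placeholder names.
DECLARE @before bigint =
    (SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage);

ALTER INDEX IX_BigTable_Col ON dbo.BigTable REBUILD;

SELECT (SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage) - @before
       AS log_bytes_generated;  -- rough: assumes no concurrent log activity
```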

What I Actually Did

I didn't just blindly turn off maintenance. I measured first. For three months, I tracked two things on a busy OLTP system (roughly 8,000 transactions per second):

1. Fragmentation levels before the Sunday rebuild. I captured fragmentation for every index over 1,000 pages using sys.dm_db_index_physical_stats with LIMITED mode (because DETAILED mode on a busy system is its own problem).

2. Query performance metrics from Query Store. Average duration, CPU time, and logical reads for the top 100 queries, captured daily.
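The first capture can be sketched like this. dbo.FragHistory is an illustrative logging table, not part of the original setup:

```sql
-- Weekly fragmentation snapshot, LIMITED mode (cheap enough for a busy system).
-- dbo.FragHistory is a hypothetical logging table.
IF OBJECT_ID(N'dbo.FragHistory') IS NULL
    CREATE TABLE dbo.FragHistory (
        captured_at   datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        table_name    sysname   NOT NULL,
        index_name    sysname   NOT NULL,
        page_count    bigint    NOT NULL,
        fragmentation float     NOT NULL
    );

INSERT INTO dbo.FragHistory (table_name, index_name, page_count, fragmentation)
SELECT OBJECT_NAME(ips.object_id), i.name,
       ips.page_count, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.page_count > 1000
  AND i.index_id > 0;  -- skip heaps
```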

What I found: most indexes were below 15% fragmentation before the rebuild. The heavily-written indexes got back to 15%+ within a day or two after the rebuild. The read-heavy indexes that barely changed were at 2-3% fragmentation. We were rebuilding hundreds of indexes every week to fix a handful that would refragment immediately anyway.

The query performance data was more revealing. I compared Monday morning performance (right after the rebuild) to Saturday afternoon performance (right before the next rebuild). The difference in average query duration was within noise: less than 2% variation, well within normal workload fluctuation.

So I turned off the weekly rebuild and switched to a targeted approach.

The Targeted Approach

Instead of "rebuild everything every Sunday," I now do this:

Update statistics on a schedule. This is the part that actually matters for query performance. Stale statistics lead to bad cardinality estimates, which lead to bad plans. I run UPDATE STATISTICS with FULLSCAN on the most active tables twice a week, and with the default sample rate on everything else once a week.
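A minimal version of that schedule might look like this, with placeholder names for the hot tables:

```sql
-- Twice a week: full scan on the most active tables.
-- dbo.Orders and dbo.OrderLines are placeholder names.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
UPDATE STATISTICS dbo.OrderLines WITH FULLSCAN;

-- Once a week: default sampling for everything else that needs it.
EXEC sys.sp_updatestats;
```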

Rebuild only when it matters. I have a script that checks fragmentation weekly and only rebuilds indexes that meet all three criteria: over 30% fragmented, over 10,000 pages (small indexes don't benefit from rebuilds), and actually used by queries (checked against sys.dm_db_index_usage_stats). This typically means rebuilding 5-10 indexes per week instead of 500.
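A sketch of that selection logic, generating the statements rather than executing them so they can be reviewed first:

```sql
-- Candidate rebuilds: all three criteria must hold.
-- Note: sys.dm_db_index_usage_stats counters reset at instance restart,
-- so "unused" really means "unused since the last restart".
SELECT N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
     + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id)) + N'.'
     + QUOTENAME(OBJECT_NAME(i.object_id)) + N' REBUILD;' AS rebuild_cmd
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
JOIN sys.dm_db_index_usage_stats us
    ON us.database_id = DB_ID()
   AND us.object_id = i.object_id AND us.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30     -- criterion 1
  AND ips.page_count > 10000                    -- criterion 2
  AND (us.user_seeks + us.user_scans + us.user_lookups) > 0  -- criterion 3
  AND i.index_id > 0;
```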

Monitor page density, not fragmentation percentage. On SSD-backed storage, I care more about page density (how full are the pages?) than fragmentation (how ordered are the pages?). Low page density wastes buffer pool memory. High fragmentation on SSDs doesn't do much.

-- Check page density for indexes that matter.
-- SAMPLED (or DETAILED) mode is required here: LIMITED mode returns NULL
-- for avg_page_space_used_in_percent.
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name AS index_name,
    ips.avg_page_space_used_in_percent AS page_density,
    ips.avg_fragmentation_in_percent AS fragmentation,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
JOIN sys.indexes i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE i.index_id > 0  -- skip heaps
    AND ips.page_count > 10000
    AND ips.avg_page_space_used_in_percent < 70
ORDER BY ips.page_count DESC;

If page density drops below 70%, that's when I consider a rebuild. A 70% page density means you're wasting 30% of the buffer pool memory dedicated to that index. That's worth fixing. But fragmentation at 40% on an SSD? I'll leave it alone.

What Happened After Six Months

Here's what I measured after six months of the new approach:

IO throughput during the maintenance window dropped by 85%. We went from hours of sustained 100% IO utilization to a few minutes of targeted rebuilds.

Transaction log volume from maintenance dropped by over 90%. Fewer rebuilds means dramatically less log generation. Our log backup storage costs dropped proportionally.

Query performance was unchanged. Average query durations, CPU times, and logical reads were statistically identical to the before period. The weekly rebuild cycle was not providing any measurable performance benefit for this workload on this hardware.

The maintenance window shrank from 4 hours to 20 minutes. Mostly statistics updates with a handful of targeted rebuilds. This matters because the shorter the maintenance window, the more time the server is available for actual work.

Where This Doesn't Apply

Before someone fires off an angry comment: this approach is specific to SSD-backed OLTP systems. If you're still on spinning disk, fragmentation still matters and regular rebuilds are still important. If you're running a data warehouse with large sequential scans, fragmentation matters even on SSDs, because the scan pattern benefits from contiguous pages. And if you're on Standard Edition, where rebuilds are offline, rebuild less frequently and lean on ALTER INDEX ... REORGANIZE, which is always an online operation, for the indexes that genuinely need defragmenting.

Also, this is for index fragmentation maintenance. Statistics updates remain critical regardless of storage type. If anything, I update statistics more frequently now that I'm not relying on the rebuild to do it as a side effect.

The Point

The Sunday night index rebuild is one of those practices that made perfect sense when it started and has been carried forward on momentum rather than evidence. Every DBA inherits it, maintains it, and passes it on. Nobody questions it because it feels like the responsible thing to do. Questioning it feels reckless.

But if you measure it, and measuring is the key word here, you might find what I found: on modern hardware, for many workloads, weekly full index rebuilds are all cost and no benefit. The resources you're spending on maintenance could be better spent on actual queries.

I'm not saying turn off your maintenance jobs tomorrow. I'm saying measure whether they're actually helping. Track query performance before and after. Check fragmentation accumulation rates. Look at how much IO and log space the rebuilds consume. Then make an informed decision based on your specific workload and hardware.

That kind of analysis, comparing maintenance costs against actual performance impact, is something we built into AutoDBA's diagnostic. It's one of those things that's obvious in hindsight but most people never look at because the rebuild job has always been there and it's never been a problem. Except it might be a problem you've just never quantified.

Want to find issues on your SQL Server?

Run AutoDBA's free diagnostic scan and get results in 60 seconds. No agent install required.

Get Started Free