Opinion · 10 min read

The SQL Server Defaults That Still Ship Wrong in 2026

Evan Barke · DBA & Founder

Every few years Microsoft releases a new version of SQL Server and I check the defaults to see if they've finally fixed the ones that have been wrong since the early 2000s. SQL Server 2025 ships later this year. Some things have improved. Most of the bad defaults are still there.

I get why. Backward compatibility is sacred in the enterprise database world. You can't change a default that ten million installations depend on without risking breaking something. But the result is that every new SQL Server installation starts life with settings that any experienced DBA immediately changes.

Here are the defaults I still find myself fixing on nearly every server I touch.

Max Server Memory: Still Unlimited

This is the granddaddy of bad defaults and it's been the same since SQL Server 2005. The default value for max server memory is 2,147,483,647 MB. That's 2 petabytes. SQL Server will consume every byte of RAM on the machine and the OS will start paging to disk.

I understand why Microsoft can't change this. They don't know how much RAM your server has, what else is running on it, or how much memory your workload needs. Setting a conservative default would cause complaints from people whose queries suddenly run slower. So they leave it unlimited and rely on DBAs to set it correctly.

But here's the thing: most SQL Server installations are done by sysadmins or developers who aren't database specialists. They follow the installer wizard, click next a bunch of times, and move on. Nobody tells them that the database is going to starve the operating system of memory.

I've lost count of how many production outages I've investigated where the root cause was max server memory at default. The server has 64GB of RAM, SQL Server grabs all 64GB, the OS has no memory for file system cache or its own operations, and the whole thing falls over during peak load.

What it should be: The installer should detect physical memory and set max server memory to 90% (or physical memory minus 4GB, whichever leaves more for the OS). This is what every DBA does manually anyway.
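As a sketch, the manual fix is a pair of sp_configure calls. The value below assumes a 64GB server and the 90% rule of thumb from above; adjust it for your machine's actual RAM:

```sql
-- max server memory is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example: 64GB server, leave ~10% for the OS.
-- Value is in MB: 64 * 1024 * 0.9 = 58982.
EXEC sp_configure 'max server memory (MB)', 58982;
RECONFIGURE;
```

The change takes effect immediately, no restart required, which makes it one of the cheapest fixes on this list.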


Cost Threshold for Parallelism: Still 5

The cost threshold for parallelism determines the minimum estimated query cost before the optimizer considers a parallel execution plan. The default is 5. It's been 5 since SQL Server 7.0 in 1998.

In 1998, a cost of 5 was actually a reasonably expensive query. The cost model was calibrated for the hardware of the era: single-core CPUs, slow disk, limited memory. Today, a cost of 5 is nothing. A simple three-table join with a few thousand rows will exceed cost 5.

With the default of 5, SQL Server parallelizes everything. Tiny queries that would finish in 2ms on a single core instead get split across 8 threads, each doing almost nothing, with more time spent on thread coordination and exchange operators than on actual work. Meanwhile, those threads aren't available for other queries that genuinely need them.

The CXPACKET and CXCONSUMER waits that result from excessive parallelism are among the most common wait types I see in production. And the fix is almost always the same: raise the cost threshold.

What it should be: At least 25. Most experienced DBAs set it between 25 and 50, with some going to 100+ on pure OLTP systems. The default of 5 made sense on hardware from the Clinton administration. It doesn't make sense on a modern 64-core server.
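The fix is a one-liner. The value of 50 here is a common starting point from the range above, not a universal answer; watch your plan cache and wait stats after changing it:

```sql
-- cost threshold for parallelism is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the 1998-era default of 5 to a modern starting point
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```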

MAXDOP: Still 0 (Use All Processors)

Related to cost threshold but separate: MAXDOP controls the maximum number of threads a single query can use when it goes parallel. The default is 0, which means "use all available logical processors."

On a 4-core server, this is fine. On a 64-core, 128-thread server, it means a single reporting query can consume every thread on the machine. Other queries queue up waiting for threads, and you get the classic symptom where one big query makes the entire application feel slow.

SQL Server 2019 improved this slightly by setting a smarter MAXDOP during installation if you choose the recommended settings. But upgrades don't change existing settings, and most production servers I encounter still have MAXDOP at 0 from their original install.

What it should be: Microsoft's own recommendation is the lesser of 8 or the number of physical cores per NUMA node. For most servers, 4 or 8 is a good starting point. A single query should never be allowed to monopolize the entire CPU.
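Setting it follows the same pattern. The value of 8 below is the ceiling from Microsoft's guidance mentioned above; use a lower number if your NUMA nodes have fewer cores:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap how many threads any single query can consume
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```

Note that MAXDOP and cost threshold work together: cost threshold decides whether a query goes parallel at all, MAXDOP decides how wide it goes when it does.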


Auto-Close: Still On for Express Edition

Auto-close is a setting that shuts down a database and releases its resources when the last user disconnects. Every time a new connection comes in, the database has to go through a full startup sequence: recovery, cache warming, the works.

On SQL Server Express Edition, auto-close defaults to ON. Express is Microsoft's free tier, widely used for small applications, development, and embedded scenarios. A lot of these applications have bursty connection patterns: connect, do some work, disconnect, wait, connect again.

With auto-close on, every burst of activity triggers a database shutdown and restart cycle. The first query after a gap is painfully slow because the entire buffer cache is cold. I've debugged "intermittent slow query" complaints that turned out to be nothing more than auto-close restarting the database between requests.

What it should be: OFF. Always. There is no production scenario where auto-close is beneficial. Microsoft keeps it on for Express because they want the free edition to use minimal resources when idle. But the performance penalty during startup is far worse than leaving the database open.
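Turning it off is a per-database setting. A useful trick is to let the catalog generate the fix for every affected database at once (the database name in the first statement is a placeholder):

```sql
-- Fix one database directly
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;

-- Or generate the statement for every database that still has it on
SELECT 'ALTER DATABASE ' + QUOTENAME(name) + ' SET AUTO_CLOSE OFF;'
FROM sys.databases
WHERE is_auto_close_on = 1;
```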

Auto-Shrink: Defaults Off But the Option Exists

I'll give Microsoft partial credit here. Auto-shrink defaults to off on new databases. But the fact that it exists as a checkbox in the UI that someone can enable is still a problem. I've covered the horrors of auto-shrink in a previous post so I'll keep this brief: it causes a cycle of shrink, fragment, rebuild, grow, shrink that burns IO for no benefit.

SQL Server 2022 added a "last resort" behavior where auto-shrink only kicks in under genuine disk pressure. That's better. But the real fix would be removing the option entirely and replacing it with an intelligent space management feature that reclaims space without destroying index structure.
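If you inherit a server where someone did tick that checkbox, the same catalog-query pattern finds and fixes it (database name is a placeholder):

```sql
-- Find databases with auto-shrink enabled
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;

-- Turn it off
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;
```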

Recovery Model: Simple by Default

New user databases default to the SIMPLE recovery model. This means transaction log backups aren't possible, which means point-in-time recovery isn't possible. If your database crashes at 3pm and your last full backup was at midnight, you've lost 15 hours of data.

I understand why this is the default. FULL recovery requires transaction log backups, and if you don't set them up, the log file grows until the disk is full. So Microsoft defaults to SIMPLE because it's self-managing. But the consequence is that many production databases are running in SIMPLE recovery because nobody changed it after creation, and nobody realizes they can't do a point-in-time restore until they need to.

What it should be: This one is genuinely hard. FULL recovery without log backups is dangerous in a different way. Maybe the answer is a first-run wizard that asks "Do you need to recover this database to a specific point in time?" and sets the model accordingly. The current approach of defaulting to the less safe option and hoping people change it doesn't work.
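If you do need point-in-time recovery, the switch itself is simple, but remember that FULL recovery only helps once a full backup and regular log backups are in place. A minimal sketch, with hypothetical backup paths:

```sql
-- Switch the recovery model (database and paths are placeholders)
ALTER DATABASE [YourDatabase] SET RECOVERY FULL;

-- The log chain starts at the next full backup
BACKUP DATABASE [YourDatabase] TO DISK = N'D:\Backups\YourDatabase.bak';

-- Then schedule log backups on a short interval, e.g. every 15 minutes
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase.trn';
```

Without that scheduled log backup job, FULL recovery just means an ever-growing log file, which is exactly the failure mode Microsoft's SIMPLE default is trying to avoid.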

Query Store: Finally On by Default (Almost)

To end on a positive note: SQL Server 2022 made Query Store enabled by default for new databases. This is a big deal. Query Store is the single most valuable diagnostic tool in SQL Server. It captures query plans, runtime stats, and wait statistics at the query level. It's like having a flight recorder for your database.

Prior to 2022, Query Store was off by default, which meant most databases didn't have it. By the time a DBA needed it to diagnose a performance regression, there was no historical data to look at.
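On pre-2022 instances, or on databases upgraded from older versions where the setting carried over, enabling it is two statements (database name is a placeholder):

```sql
-- Enable Query Store and make sure it's actively capturing
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```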

SQL Server 2025 takes this further with intelligent Query Store defaults and better automatic tuning integration. This is the right direction. More features should be on by default with sane settings, not off by default requiring an expert to enable them.

Why This Matters

Every bad default is a landmine waiting for someone who doesn't know it's there. Experienced DBAs fix these settings on day one and forget about them. But SQL Server runs on hundreds of thousands of servers managed by people who aren't database specialists. They're developers, sysadmins, DevOps engineers who installed SQL Server because their application needed it.

Those people shouldn't need 15 years of DBA experience to get reasonable performance out of a fresh install. The defaults should be good enough that a non-expert installation performs well without tuning. We're not there yet, but each version gets a little closer.

In the meantime, if you've got SQL Server instances running with out-of-the-box settings, it's worth spending 20 minutes to check and fix the basics. Or run a diagnostic scan and let the analysis tell you exactly what needs changing, with the T-SQL to fix it.
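For the instance-level settings covered above, a single catalog query shows where you stand:

```sql
-- Compare current values against the defaults discussed in this post
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max server memory (MB)',
               'cost threshold for parallelism',
               'max degree of parallelism');
```

If you see 2147483647, 5, and 0 respectively, the instance is still running on out-of-the-box settings.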
