Case Study · 6 min read

We Analysed a Production SQL Server and Found 27 Issues in 60 Seconds

Evan Barke · DBA & Founder

Last week I decided to eat my own dogfood. We pointed AutoDBA at one of our production SQL Server instances to see what it would actually find on a real system that a human DBA (me) has been managing for years.

The server in question: 16 CPUs, 127GB RAM, SQL Server 2022 Developer Edition, running a mix of OLTP and reporting workloads. I've been looking after this thing for a while now, so I figured it was in pretty good shape.

I was wrong.

Running the Collection

The whole process took about 90 seconds end to end. You run a PowerShell script on the server, it collects diagnostics via a SQL script, compresses and encrypts the output, and spits out a .autodba file. You upload that file to the dashboard and the analysis kicks off.

The rule-based engine finished in under 2 seconds. Then the AI analysis took maybe another 40 seconds. When the results came back I honestly stared at the screen for a minute.

27 issues. 1 critical. 13 high severity.

On a server I thought was well tuned.


The Critical Finding: SOS_WORK_DISPATCHER Saturation

The AI flagged a critical issue I had completely missed. SOS_WORK_DISPATCHER waits were consuming 96.7% of total wait time. This is a worker thread scheduling issue that indicates severe parallelism problems.

I knew we had some wait stats noise but I'd been filtering this out mentally as "background stuff." The AI didn't have that bias. It looked at the numbers, saw 96.7%, and correctly flagged it as the single biggest performance bottleneck on the server.
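The post doesn't include the collection script itself, but the kind of wait-stats breakdown that surfaces a number like 96.7% can be reproduced with a query along these lines against `sys.dm_os_wait_stats` (a sketch only; the benign-wait filter list here is a small sample of my own, and real scripts exclude dozens more):

```sql
-- Top waits by share of total wait time since last restart.
-- The exclusion list is illustrative; production scripts filter many more benign waits.
WITH waits AS (
    SELECT wait_type,
           wait_time_ms,
           100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'BROKER_TASK_STOP', N'XE_TIMER_EVENT')
)
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       CAST(pct AS DECIMAL(5, 2)) AS pct_of_total
FROM waits
ORDER BY wait_time_ms DESC;
```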

The recommendation came with specific T-SQL to investigate and a remediation plan involving MAXDOP tuning. Not generic advice you'd find on a blog post, but specific to our workload profile.
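I won't reproduce the exact remediation plan here, but a MAXDOP investigation generally starts by checking the current parallelism settings. Something like the following (the value `8` is purely an example; the right number depends on your NUMA layout and workload, not a blog post):

```sql
-- Inspect current server-wide parallelism configuration.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism',
               N'cost threshold for parallelism');

-- Example change only: pick a value based on your core/NUMA layout.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```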

31 Database Files With Low Free Space

This one was embarrassing. 31 database files were running at less than 5% free space. Some were at 0%. That means every write operation triggers an autogrowth event, which blocks everything while the file expands.

I knew a few files were tight on space but I didn't realize how widespread it was. When you're managing dozens of databases it's easy to lose track of individual file sizes. AutoDBA just scanned them all in one pass and said "hey, 31 of these are about to cause you problems."
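If you want to spot-check this yourself, per-file free space for a database can be pulled with `FILEPROPERTY` (this runs against the current database only; to sweep dozens of databases you'd loop it per database):

```sql
-- Free space per file in the current database.
-- size and FILEPROPERTY are in 8KB pages; dividing by 128 converts to MB.
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb,
       CAST(100.0 * (size - FILEPROPERTY(name, 'SpaceUsed')) / size
            AS DECIMAL(5, 2)) AS free_pct
FROM sys.database_files
ORDER BY free_pct ASC;
```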


The Query That Was Eating 110 Seconds of CPU

The AI spotted a single query execution that consumed 110 seconds of CPU time. In one execution. It was an index usage collection query (ironic, given that we're building a tool that analyses indexes) that was doing a massive scan across system views.

It also found 938 executions of xp_cmdshell consuming 126 seconds of total CPU. The recommendation was to batch these operations instead of firing them individually. Makes total sense in hindsight.
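Finding CPU hogs like these doesn't require the tool, to be fair. A standard query over `sys.dm_exec_query_stats` surfaces the same candidates (sketch only; stats reset on restart and on plan cache eviction, so treat the numbers as a lower bound):

```sql
-- Top cached statements by total CPU since they entered the plan cache.
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```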

12 Unused Indexes Wasting Write Performance

We had 12 indexes that had zero seeks, zero scans, and zero lookups since the last restart. But they were still being maintained on every insert and update. That's pure overhead for no benefit.

The combined wasted space was about 66MB which isn't huge, but the write amplification on a busy OLTP system adds up. One of those indexes had 12,769 updates and zero reads. That's a lot of wasted I/O.
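The write-heavy, read-never pattern shows up directly in `sys.dm_db_index_usage_stats`. A query in this shape finds them (note the caveat: usage stats reset on restart, and indexes that have never been touched at all won't have a row here, so don't drop anything on one snapshot):

```sql
-- Nonclustered indexes written to but never read since last restart.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       us.user_updates,
       us.user_seeks + us.user_scans + us.user_lookups AS total_reads
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
     ON us.object_id = i.object_id
    AND us.index_id = i.index_id
    AND us.database_id = DB_ID()
WHERE i.type_desc = N'NONCLUSTERED'
  AND us.user_seeks + us.user_scans + us.user_lookups = 0
  AND us.user_updates > 0
ORDER BY us.user_updates DESC;
```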

What the AI Found That Rules Missed

The rule engine caught the obvious stuff: configuration gaps, missing indexes with high improvement scores, IO latency above thresholds. It's good at checking known patterns against known thresholds.

But the AI found things that require context. It connected the SOS_WORK_DISPATCHER waits to the parallelism configuration. It noticed the buffer pool was running below target memory and linked that to the plan cache hit ratio. It recommended enabling Query Store on all production databases as a long-term diagnostics strategy.

These aren't things you can catch with simple threshold checks. They require understanding how different metrics relate to each other, which is exactly what a senior DBA does when they review a system. Except the AI did it in 40 seconds instead of 4 hours.
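For what it's worth, the Query Store recommendation is a one-statement change per database (the database name and sizing values below are placeholders, not what the tool emitted):

```sql
-- Enable Query Store with conservative example settings.
-- [YourDatabase] and the sizing values are placeholders; tune per database.
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     MAX_STORAGE_SIZE_MB = 1000,
     QUERY_CAPTURE_MODE = AUTO);
```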

The Takeaway

I've been a DBA for over 15 years and I was genuinely surprised by what AutoDBA found on my own server. Not because the issues were exotic or unusual, but because they were the kind of things that accumulate slowly and you stop noticing.

Every DBA thinks their servers are in decent shape. Most of the time they're right about the big things and wrong about the dozens of small things that add up to real performance problems.

If you want to try it on your own SQL Server, the free tier gives you the rule-based analysis. The starter plan at $99/month adds the AI analysis with specific T-SQL remediation scripts.

Want to find issues on your SQL Server?

Run AutoDBA's free diagnostic scan and get results in 60 seconds. No agent install required.

Get Started Free