Quick Answer
SQL Server deadlocks occur when two or more sessions each hold locks that the other needs, creating a circular dependency that neither can resolve on its own. The Lock Monitor detects these cycles (every 5 seconds by default, more frequently once a deadlock has been found) and rolls back the transaction that is cheapest to undo, returning error 1205 to the victim. Frequent deadlocks usually point to poor transaction design, inconsistent table access ordering, or missing indexes.
Root Cause Analysis
Deadlocks manifest when the Lock Manager's circular dependency detection identifies a resource wait chain in which session A holds a lock that session B needs, while session B holds a lock that session A needs. The Lock Monitor thread traverses the lock wait-for graph every 5 seconds by default; once a deadlock is found, the interval drops to as little as 100 ms until deadlocks stop occurring. The interval is not user-configurable. Trace flag 1204 enables legacy text-format deadlock reporting to the SQL Server error log (trace flag 1222 writes a more detailed XML-style report); neither flag affects the detection interval.
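As a quick illustration, the legacy trace flags can be checked and toggled with standard DBCC commands (shown here globally; enabling 1204 is harmless alongside Extended Events capture):

```sql
-- List trace flags currently enabled globally
DBCC TRACESTATUS(-1);

-- Enable legacy text-format deadlock reporting to the error log
-- (-1 applies the flag globally, not just to this session)
DBCC TRACEON(1204, -1);

-- Disable it again once diagnosis is complete
DBCC TRACEOFF(1204, -1);
```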
Extended Events, introduced in SQL Server 2008, capture deadlocks through the built-in system_health session and replace the deprecated SQL Profiler deadlock trace with lower overhead. On servers with 16 or more CPUs, the Lock Manager partitions its lock hash tables (lock partitioning, available since SQL Server 2005), reducing lock manager contention but potentially making cross-partition deadlocks more complex to diagnose.
SQL Server 2019 improved deadlock detection performance by optimizing the lock monitor's graph traversal algorithm.
The deadlock victim selection algorithm considers deadlock priority (SET DEADLOCK_PRIORITY) first: the session with the lowest priority is chosen as the victim. When the deadlocked sessions share the same priority, SQL Server selects the session whose transaction is least expensive to roll back, estimated largely from transaction log usage.
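A session can influence victim selection explicitly. A minimal sketch marking a low-value batch job as the preferred victim:

```sql
-- Prefer this session as the deadlock victim; accepts LOW, NORMAL,
-- HIGH, or an integer from -10 to 10 (default is NORMAL = 0)
SET DEADLOCK_PRIORITY LOW;

BEGIN TRANSACTION;
    -- low-priority batch work here
COMMIT TRANSACTION;

-- Restore the default for subsequent work on this session
SET DEADLOCK_PRIORITY NORMAL;
```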
AutoDBA checks deadlock frequency, victim patterns, and lock escalation thresholds across your entire SQL Server instance in 60 seconds. Download the free diagnostic script and see what else needs attention.
Diagnostic Queries
-- Query 1: Current deadlock victims and frequency from system health session
SELECT
CAST(event_data AS XML).value('(/event/@timestamp)[1]', 'datetime2') AS event_time,
CAST(event_data AS XML).value('(/event/data[@name="xml_report"]/value)[1]', 'xml') AS deadlock_graph,
CAST(event_data AS XML).value('(/event/action[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report'
ORDER BY event_time DESC;
-- Query 2: Extract deadlock details including victim and process information
WITH deadlock_data AS (
SELECT
CAST(event_data AS XML) AS deadlock_xml,
CAST(event_data AS XML).value('(/event/@timestamp)[1]', 'datetime2') AS event_time
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report'
)
SELECT
d.event_time,
victim.v.value('@id', 'varchar(50)') AS victim_process_id,
prc.p.value('@id', 'varchar(50)') AS process_id,
prc.p.value('@lockMode', 'varchar(10)') AS process_lock_mode,
prc.p.value('(inputbuf)[1]', 'nvarchar(max)') AS process_sql_text
-- Note: PROC is a reserved keyword, so the process node alias is prc
FROM deadlock_data d
CROSS APPLY d.deadlock_xml.nodes('//deadlock/victim-list/victimProcess') AS victim(v)
CROSS APPLY d.deadlock_xml.nodes('//deadlock/process-list/process') AS prc(p)
ORDER BY d.event_time DESC;
-- Query 3: Analyze lock wait statistics for deadlock-prone resources
-- Join sys.dm_tran_locks with sys.dm_os_waiting_tasks to get blocking info
SELECT
tl.resource_type,
tl.resource_database_id,
DB_NAME(tl.resource_database_id) AS database_name,
tl.resource_description,
tl.request_mode,
tl.request_type,
tl.request_status,
tl.request_session_id,
wt.blocking_session_id,
wt.wait_type,
wt.wait_duration_ms
FROM sys.dm_tran_locks tl
LEFT JOIN sys.dm_os_waiting_tasks wt
ON tl.lock_owner_address = wt.resource_address
WHERE tl.request_status = 'WAIT'
ORDER BY tl.resource_database_id, tl.resource_description;
-- Query 4: Identify sessions with high lock acquisition patterns
SELECT
s.session_id,
s.login_name,
s.program_name,
s.host_name,
COUNT(*) AS lock_count,
SUM(CASE WHEN tl.request_status = 'WAIT' THEN 1 ELSE 0 END) AS waiting_locks,
r.blocking_session_id,
r.wait_type,
r.wait_time,
SUBSTRING(st.text, (r.statement_start_offset/2) + 1,
((CASE WHEN r.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.text)) * 2
ELSE r.statement_end_offset END - r.statement_start_offset)/2) + 1) AS current_statement
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_tran_locks tl ON s.session_id = tl.request_session_id
LEFT JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE s.is_user_process = 1
GROUP BY s.session_id, s.login_name, s.program_name, s.host_name,
r.blocking_session_id, r.wait_type, r.wait_time,
SUBSTRING(st.text, (r.statement_start_offset/2) + 1,
((CASE WHEN r.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.text)) * 2
ELSE r.statement_end_offset END - r.statement_start_offset)/2) + 1)
HAVING COUNT(*) > 50
ORDER BY lock_count DESC;
-- Query 5: Check for missing indexes that could cause lock escalation
SELECT
migs.group_handle,
mid.statement AS table_name,
mid.equality_columns,
mid.inequality_columns,
mid.included_columns,
migs.unique_compiles,
migs.user_seeks,
migs.avg_total_user_cost,
migs.avg_user_impact,
(migs.user_seeks + migs.user_scans) * migs.avg_total_user_cost * (migs.avg_user_impact * 0.01) AS index_advantage
FROM sys.dm_db_missing_index_group_stats migs
INNER JOIN sys.dm_db_missing_index_groups mig ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_user_impact > 50
ORDER BY index_advantage DESC;
Fix Scripts
Enable deadlock monitoring with Extended Events
Creates a dedicated Extended Events session to capture detailed deadlock information, including query and plan hashes and resource details. This replaces the deprecated SQL Profiler approach with lower monitoring overhead.
-- WARNING: Test in development first - creates permanent XE session
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'deadlock_monitor')
DROP EVENT SESSION deadlock_monitor ON SERVER;
CREATE EVENT SESSION deadlock_monitor ON SERVER
ADD EVENT sqlserver.xml_deadlock_report(
ACTION(sqlserver.client_app_name,
sqlserver.client_hostname,
sqlserver.database_name,
sqlserver.query_hash,
sqlserver.query_plan_hash,
sqlserver.sql_text,
sqlserver.username))
ADD TARGET package0.event_file(
-- NOTE: the target directory (C:\DeadlockLogs here) must already exist
SET filename = N'C:\DeadlockLogs\deadlock_monitor',
max_file_size = 50,
max_rollover_files = 10)
WITH (MAX_MEMORY = 8192 KB,
EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY = 5 SECONDS,
MAX_EVENT_SIZE = 0 KB,
MEMORY_PARTITION_MODE = NONE,
TRACK_CAUSALITY = OFF,
STARTUP_STATE = ON);
ALTER EVENT SESSION deadlock_monitor ON SERVER STATE = START;
Implement application deadlock retry logic template
Provides a T-SQL template for implementing exponential backoff retry logic in stored procedures. This handles transient deadlock victims gracefully without requiring application changes.
-- Template for deadlock-aware stored procedure with retry logic
-- Modify @max_retries and base delay as needed for your workload
CREATE OR ALTER PROCEDURE sp_YourProcedureWithRetry
@param1 INT,
@param2 VARCHAR(50)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @retry_count INT = 0;
DECLARE @max_retries INT = 3;
DECLARE @base_delay INT = 100; -- milliseconds
DECLARE @error_number INT;
DECLARE @delay_ms INT;
WHILE @retry_count <= @max_retries
BEGIN
BEGIN TRY
BEGIN TRANSACTION;
-- Your original procedure logic here
-- Example: UPDATE table1 SET col1 = @param1 WHERE id = @param2;
COMMIT TRANSACTION;
RETURN 0; -- Success
END TRY
BEGIN CATCH
SET @error_number = ERROR_NUMBER();
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
-- Check if this is a deadlock victim (1205)
IF @error_number = 1205 AND @retry_count < @max_retries
BEGIN
SET @retry_count += 1;
-- Exponential backoff: 100 ms, 200 ms, 400 ms, ...
SET @delay_ms = @base_delay * POWER(2, @retry_count - 1);
DECLARE @delay_time CHAR(12) = CONVERT(CHAR(12), DATEADD(MILLISECOND, @delay_ms, 0), 114);
WAITFOR DELAY @delay_time;
CONTINUE; -- Retry the operation
END
-- Re-throw non-deadlock errors or exceeded retry limit
THROW;
END CATCH
END
END;
Optimize transaction scope and lock hints
Reduces deadlock probability by minimizing transaction duration and using appropriate lock hints. This script identifies long-running transactions and provides recommendations.
-- Identify and optimize problematic transaction patterns
-- WARNING: Analyze results before implementing NOLOCK or READUNCOMMITTED hints
SELECT
s.session_id,
s.login_name,
dt.transaction_begin_time,
DATEDIFF(second, dt.transaction_begin_time, GETDATE()) AS transaction_duration_seconds,
dt.transaction_type,
dt.transaction_state,
CASE dt.transaction_state
WHEN 0 THEN 'Uninitialized'
WHEN 1 THEN 'Not started'
WHEN 2 THEN 'Active'
WHEN 3 THEN 'Read-only ended'
WHEN 4 THEN 'Distributed prep'
WHEN 5 THEN 'Prepared'
WHEN 6 THEN 'Committed'
WHEN 7 THEN 'Rolling back'
WHEN 8 THEN 'Rolled back'
END AS transaction_state_desc,
r.wait_type,
r.blocking_session_id,
SUBSTRING(st.text, (r.statement_start_offset/2) + 1,
((CASE WHEN r.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.text)) * 2
ELSE r.statement_end_offset END - r.statement_start_offset)/2) + 1) AS current_statement
FROM sys.dm_tran_active_transactions dt
INNER JOIN sys.dm_tran_session_transactions st_tran ON dt.transaction_id = st_tran.transaction_id
INNER JOIN sys.dm_exec_sessions s ON st_tran.session_id = s.session_id
LEFT JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE DATEDIFF(second, dt.transaction_begin_time, GETDATE()) > 30 -- Transactions over 30 seconds
AND s.is_user_process = 1
ORDER BY transaction_duration_seconds DESC;
-- Recommendations for common deadlock patterns:
-- 1. Access tables in consistent order across all transactions
-- 2. Use smallest possible transaction scope
-- 3. Consider SNAPSHOT isolation for read workloads
-- 4. Add covering indexes to reduce lock duration
-- 5. Use UPDLOCK hint for read-then-update patterns: SELECT * FROM table WITH (UPDLOCK) WHERE...
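A minimal sketch of the read-then-update pattern from recommendation 5, using a hypothetical Orders table. Taking UPDLOCK on the initial read prevents two sessions from both reading the row and then deadlocking on the conversion to an exclusive lock:

```sql
-- Hypothetical table and column names for illustration
BEGIN TRANSACTION;

DECLARE @current_qty INT;

-- UPDLOCK (with HOLDLOCK for range protection) serializes the
-- read-then-update sequence across concurrent sessions
SELECT @current_qty = quantity
FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
WHERE order_id = 42;

UPDATE dbo.Orders
SET quantity = @current_qty - 1
WHERE order_id = 42;

COMMIT TRANSACTION;
```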
Configure snapshot isolation for read workloads
Enables snapshot isolation to eliminate reader-writer deadlocks. This requires careful planning, as it increases tempdb usage and can affect performance for write-heavy workloads.
-- Enable snapshot isolation - TEST THOROUGHLY before production deployment
-- This affects tempdb usage and may impact write performance
USE master;
GO
-- Check current snapshot isolation settings
SELECT
name,
snapshot_isolation_state,
snapshot_isolation_state_desc,
is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDatabaseName';
-- Enable snapshot isolation (requires exclusive access)
-- WARNING: This may require application downtime
ALTER DATABASE YourDatabaseName SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Optional: Enable read committed snapshot (eliminates read locks entirely)
-- WARNING: This changes default transaction behavior for all connections
-- ALTER DATABASE YourDatabaseName SET READ_COMMITTED_SNAPSHOT ON;
-- GO
-- Monitor snapshot isolation overhead (the version store lives in tempdb)
SELECT
DB_NAME(database_id) AS database_name,
SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
SUM(user_object_reserved_page_count) * 8 / 1024 AS user_object_mb,
SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_object_mb
FROM tempdb.sys.dm_db_file_space_usage
GROUP BY database_id;
AutoDBA generates fix scripts like these automatically, with impact estimates and rollback SQL included.
Prevention
Implement consistent table access ordering across all application transactions. Sessions that access Customer then Order tables should never reverse this pattern to access Order then Customer. This eliminates the most common deadlock scenarios.
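A minimal sketch of the rule, with hypothetical Customer and [Order] tables: every transaction that touches both tables locks Customer first.

```sql
-- Hypothetical tables; all code paths lock Customer before [Order]
DECLARE @customer_id INT = 1;

BEGIN TRANSACTION;

UPDATE dbo.Customer
SET last_order_date = GETDATE()
WHERE customer_id = @customer_id;

UPDATE dbo.[Order]
SET order_status = 'Shipped'
WHERE customer_id = @customer_id;

COMMIT TRANSACTION;
-- A second code path that updated [Order] first and Customer second
-- could deadlock with this one under concurrency
```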
Design covering indexes specifically for UPDATE and DELETE operations to minimize lock duration. Wide covering indexes reduce the key lookups that extend lock hold times. Review the missing-index DMVs (sys.dm_db_missing_index_details) alongside sys.dm_db_index_usage_stats to find frequently updated tables that lack supporting indexes.
Configure appropriate isolation levels based on workload characteristics. Use READ COMMITTED SNAPSHOT for mixed OLTP workloads where dirty reads are unacceptable. Reserve SNAPSHOT isolation for long-running reports that must see consistent data without blocking writers.
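Once ALLOW_SNAPSHOT_ISOLATION is enabled for the database, a long-running report opts in per session. A sketch against a hypothetical Orders table:

```sql
-- Requires ALLOW_SNAPSHOT_ISOLATION ON for the target database
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- All reads in this transaction see a consistent point-in-time
    -- view from the version store and take no shared locks
    SELECT COUNT(*) FROM dbo.Orders;  -- hypothetical table
COMMIT TRANSACTION;
```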
Implement connection pooling with proper transaction scoping. Applications that hold transactions across multiple round trips create extended lock durations. Use explicit transaction boundaries with the shortest possible scope.
Monitor tempdb version store growth when using snapshot-based isolation levels. Configure tempdb with multiple data files sized appropriately for version store overhead. In SQL Server 2019+, enable tempdb metadata memory optimization to reduce allocation contention.
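On SQL Server 2019 and later, the tempdb metadata optimization mentioned above is a single server-scoped setting that takes effect after an instance restart:

```sql
-- SQL Server 2019+ only; requires a restart of the instance
ALTER SERVER CONFIGURATION
SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Verify the setting after the restart
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS is_enabled;
```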
Set up automated deadlock alerting using SQL Server Agent alerts on error 1205. Create maintenance plans that analyze deadlock graphs weekly to identify recurring patterns. Use Query Store in SQL Server 2016+ to correlate deadlock victims with execution plan changes.
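The Agent alert on error 1205 can be scripted as below. One caveat: by default error 1205 is not written to the error log, and Agent alerts fire only on logged messages, so the message is first altered to log. The operator name is a placeholder for an existing operator:

```sql
-- Make error 1205 write to the error log so the alert can fire
EXEC sp_altermessage 1205, 'WITH_LOG', 'true';
GO
USE msdb;
GO
EXEC dbo.sp_add_alert
    @name = N'Deadlock detected (error 1205)',
    @message_id = 1205,
    @severity = 0,                 -- must be 0 when @message_id is set
    @enabled = 1,
    @delay_between_responses = 60; -- seconds; throttles alert storms
GO
-- Attach a notification to an existing operator (placeholder name)
EXEC dbo.sp_add_notification
    @alert_name = N'Deadlock detected (error 1205)',
    @operator_name = N'DBA Team',
    @notification_method = 1;      -- 1 = e-mail
GO
```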
Need hands-on help?
Dealing with persistent deadlock analysis issues across your environment? Samix Technology provides hands-on SQL Server performance consulting with 15+ years of production DBA experience.