Quick Answer
WRITELOG waits occur when SQL Server is waiting for transaction log writes to complete on disk. High WRITELOG waits indicate a slow transaction log disk subsystem or transaction log activity that exceeds the storage's sustained write capacity. Occasional short WRITELOG waits are normal, but sustained or high-average waits deserve immediate attention because they directly delay transaction commits and degrade overall database responsiveness.
Root Cause Analysis
WRITELOG waits occur in the Log Manager component when the Log Writer thread cannot flush log records to disk fast enough. When a transaction commits, SQL Server must guarantee the log records are physically written to the transaction log file before returning control to the application. The Log Writer thread manages this process through the Log Buffer, a memory structure that holds log records before they are written to disk.
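One way to observe how hard the Log Writer is working is to sample the log-flush counters exposed through sys.dm_os_performance_counters. This is a sketch; the counter object name assumes a default instance (named instances use a different object_name prefix), and the per-second counters are cumulative, so sample twice and diff to get a rate:

```sql
-- Snapshot of log flush counters per database. The "/sec" counters are
-- cumulative since startup; take two samples and subtract to get a rate.
SELECT
instance_name AS database_name,
counter_name,
cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
AND counter_name IN ('Log Flushes/sec', 'Log Bytes Flushed/sec',
'Log Flush Waits/sec', 'Log Flush Wait Time')
ORDER BY instance_name, counter_name;
```

Dividing Log Bytes Flushed/sec by Log Flushes/sec approximates the average flush size, which indicates how effectively group commit is batching work.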
The wait manifests when the Log Writer encounters one of several bottlenecks. Disk latency on the transaction log drive directly translates to WRITELOG duration. The Log Writer performs sequential writes, but if the underlying storage cannot sustain the required write throughput, transactions queue waiting for log flushes. Transaction log auto-growth events exacerbate this condition by forcing synchronous file expansion operations that block all log writes.
Large transactions generate substantial log records that must be flushed as a unit during commit processing. High transaction frequency can overwhelm even fast storage when the cumulative log generation rate exceeds the disk subsystem's sustained write capacity. The Log Writer uses group commit optimization to batch multiple transaction commits into single I/O operations, but this mechanism has limits.
In SQL Server 2016 and later, the Log Writer subsystem uses multiple log writer threads that operate in parallel against a single database transaction log, improving throughput on high-end hardware. Note that adding multiple log files to a database provides no performance benefit: SQL Server writes to the log sequentially and only uses additional log files when the primary log is full. SQL Server 2019 introduced improvements to log block sizing that can reduce I/O overhead for certain workloads. SQL Server 2022 enhanced group commit batching algorithms to better handle high-frequency OLTP scenarios.
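Since extra log files add no throughput, databases carrying more than one are candidates for consolidation. A quick inventory check (a sketch against the catalog views):

```sql
-- Find databases with more than one transaction log file.
-- Additional log files provide no performance benefit.
SELECT
DB_NAME(database_id) AS database_name,
COUNT(*) AS log_file_count
FROM sys.master_files
WHERE type = 1 -- log files
GROUP BY database_id
HAVING COUNT(*) > 1;
```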
The Write-Ahead Logging protocol requires log records to reach non-volatile storage before data page modifications are considered durable. This fundamental requirement means WRITELOG waits directly impact user-perceived transaction performance, making them critical to resolve quickly.
AutoDBA checks Transaction log file placement, sizing recommendations, and backup frequency optimization across your entire SQL Server instance in 60 seconds. Download the free diagnostic script and see what else needs attention.
Diagnostic Queries
-- Check current WRITELOG wait statistics and trends
SELECT
wait_type,
waiting_tasks_count,
wait_time_ms,
max_wait_time_ms,
signal_wait_time_ms,
wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG'
ORDER BY wait_time_ms DESC;
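Because sys.dm_os_wait_stats accumulates since the last service restart (or a manual clear), lifetime averages can mask a problem happening right now. A two-sample delta isolates recent activity; the one-minute interval below is an arbitrary choice, adjust as needed:

```sql
-- Measure WRITELOG pressure over a sampling interval rather than
-- relying on cumulative lifetime averages
DECLARE @wait_ms BIGINT, @tasks BIGINT;
SELECT @wait_ms = wait_time_ms, @tasks = waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';

WAITFOR DELAY '00:01:00'; -- sampling interval

SELECT
wait_time_ms - @wait_ms AS wait_ms_in_interval,
waiting_tasks_count - @tasks AS waits_in_interval,
(wait_time_ms - @wait_ms) / NULLIF(waiting_tasks_count - @tasks, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';
```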
-- Identify sessions currently waiting on WRITELOG
SELECT
s.session_id,
s.login_name,
s.program_name,
s.database_id,
DB_NAME(s.database_id) AS database_name,
r.wait_type,
r.wait_time,
r.wait_resource,
t.text AS current_statement
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.wait_type = 'WRITELOG'
ORDER BY r.wait_time DESC;
-- Analyze transaction log file performance metrics
-- Note: sys.dm_io_virtual_file_stats does not return file names,
-- so join to sys.master_files for logical and physical names
SELECT
DB_NAME(vfs.database_id) AS database_name,
vfs.file_id,
mf.name AS logical_name,
mf.physical_name,
vfs.num_of_writes,
vfs.num_of_bytes_written,
vfs.io_stall_write_ms,
CASE WHEN vfs.num_of_writes = 0 THEN 0
ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms,
vfs.num_of_bytes_written / 1024 / 1024 AS mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
INNER JOIN sys.master_files mf
ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE mf.type = 1 -- transaction log files (more reliable than assuming file_id = 2)
ORDER BY vfs.io_stall_write_ms DESC;
-- Check for transaction log auto-growth events from the default trace
SELECT TOP 50
DatabaseName,
FileName,
StartTime,
EndTime,
Duration / 1000 AS duration_ms,
IntegerData * 8 / 1024 AS growth_mb, -- IntegerData is growth in 8KB pages
CASE EventClass
WHEN 92 THEN 'Data File Auto Grow'
WHEN 93 THEN 'Log File Auto Grow'
END AS event_description
FROM sys.fn_trace_gettable(
(SELECT path FROM sys.traces WHERE is_default = 1), DEFAULT
)
WHERE EventClass IN (92, 93) -- Data and Log File Auto Grow
ORDER BY StartTime DESC;
-- Examine active transactions that might be holding log space
SELECT
s.session_id,
s.login_name,
s.program_name,
DB_NAME(dt.database_id) AS database_name,
dt.database_transaction_begin_time,
dt.database_transaction_log_bytes_used / 1024 / 1024 AS log_mb_used,
dt.database_transaction_log_bytes_reserved / 1024 / 1024 AS log_mb_reserved,
at.transaction_begin_time,
DATEDIFF(second, dt.database_transaction_begin_time, GETDATE()) AS duration_seconds
FROM sys.dm_tran_active_transactions at
INNER JOIN sys.dm_tran_database_transactions dt ON at.transaction_id = dt.transaction_id
INNER JOIN sys.dm_tran_session_transactions st ON at.transaction_id = st.transaction_id
INNER JOIN sys.dm_exec_sessions s ON st.session_id = s.session_id
ORDER BY dt.database_transaction_log_bytes_used DESC;
Fix Scripts
Move transaction log to faster storage
-- Script to move transaction log file to SSD or faster storage
-- WARNING: This requires database downtime. Test in development first.
-- Replace paths with actual fast storage locations
-- Step 1: Take database offline
USE master;
ALTER DATABASE [YourDatabase] SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Step 2: Move the log file using operating system commands
-- Execute this via command line or PowerShell:
-- MOVE "C:\OldPath\YourDatabase_Log.ldf" "F:\FastStorage\YourDatabase_Log.ldf"
-- Step 3: Update SQL Server with new file location
ALTER DATABASE [YourDatabase]
MODIFY FILE (
NAME = 'YourDatabase_Log',
FILENAME = 'F:\FastStorage\YourDatabase_Log.ldf'
);
-- Step 4: Bring database back online
ALTER DATABASE [YourDatabase] SET ONLINE;
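Once the database is back online, confirm the metadata points at the new path before deleting the old file (YourDatabase is the same placeholder used above):

```sql
-- Verify the log file now resides at the new location
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('YourDatabase')
AND type = 1; -- log file
```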
Pre-size transaction log to prevent auto-growth
-- Pre-grow transaction log to appropriate size based on workload
-- Calculate target size based on peak transaction volume
-- This prevents auto-growth during peak periods
USE [YourDatabase];
-- Check current log usage first
-- Note: FILEPROPERTY only works against the current database,
-- so run this from the target database, not master
SELECT
name,
size * 8 / 1024 AS current_size_mb,
CAST(FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS DECIMAL(10,2)) AS used_mb
FROM sys.database_files
WHERE type = 1;
-- Pre-grow to prevent auto-growth (adjust size based on your workload)
ALTER DATABASE [YourDatabase]
MODIFY FILE (
NAME = 'YourDatabase_Log',
SIZE = 4096MB, -- Set appropriate size for your workload
FILEGROWTH = 512MB -- Set reasonable growth increment
);
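Growing the log in one large operation rather than many small auto-growths also keeps the virtual log file (VLF) count manageable; very high VLF counts (thousands) slow recovery and log backups. On SQL Server 2016 SP2 and later, the layout can be checked directly:

```sql
-- Check the VLF count after resizing (SQL Server 2016 SP2+)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('YourDatabase'));
```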
Configure appropriate log backup frequency
-- Implement frequent transaction log backups to prevent log growth
-- This script creates a job for frequent log backups
-- WARNING: Requires SQL Server Agent. Adjust frequency based on RPO requirements.
USE [msdb];
-- Create transaction log backup job
EXEC dbo.sp_add_job
@job_name = N'Transaction Log Backup - YourDatabase',
@enabled = 1;
EXEC dbo.sp_add_jobstep
@job_name = N'Transaction Log Backup - YourDatabase',
@step_name = N'Backup Transaction Log',
@subsystem = N'TSQL',
-- BACKUP LOG ... TO DISK does not accept an expression,
-- so build the timestamped file path in a variable first
@command = N'DECLARE @file NVARCHAR(260) =
N''C:\Backups\YourDatabase_Log_'' +
REPLACE(REPLACE(CONVERT(VARCHAR(20), GETDATE(), 120), '':'', ''''), '' '', ''_'') +
N''.trn'';
BACKUP LOG [YourDatabase]
TO DISK = @file
WITH COMPRESSION, INIT, CHECKSUM;',
@retry_attempts = 3,
@retry_interval = 2;
-- Schedule every 15 minutes (adjust based on requirements)
EXEC dbo.sp_add_schedule
@schedule_name = N'Every 15 Minutes',
@freq_type = 4, -- Daily
@freq_interval = 1,
@freq_subday_type = 4, -- Minutes
@freq_subday_interval = 15;
EXEC dbo.sp_attach_schedule
@job_name = N'Transaction Log Backup - YourDatabase',
@schedule_name = N'Every 15 Minutes';
EXEC dbo.sp_add_jobserver
@job_name = N'Transaction Log Backup - YourDatabase';
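With the job in place, msdb's backup history confirms the cadence is actually being met:

```sql
-- Verify log backups are completing at the expected frequency
SELECT TOP 20
bs.database_name,
bs.backup_start_date,
bs.backup_finish_date,
bs.backup_size / 1024 / 1024 AS backup_size_mb,
bmf.physical_device_name
FROM msdb.dbo.backupset bs
INNER JOIN msdb.dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'YourDatabase'
AND bs.type = 'L' -- log backups
ORDER BY bs.backup_start_date DESC;
```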
Optimize transaction batching for bulk operations
-- Template for batching large operations to reduce log pressure
-- Use this pattern for large UPDATE, DELETE, or INSERT operations
-- This reduces lock duration and log flush frequency
DECLARE @BatchSize INT = 10000;
DECLARE @RowsProcessed INT = 1;
-- Process in batches to reduce transaction log pressure
WHILE @RowsProcessed > 0
BEGIN
BEGIN TRANSACTION;
-- Example: Delete old records in batches
DELETE TOP (@BatchSize) FROM YourLargeTable
WHERE CreatedDate < DATEADD(MONTH, -6, GETDATE());
SET @RowsProcessed = @@ROWCOUNT;
COMMIT TRANSACTION;
-- Small delay to allow log flushes and reduce system pressure
IF @RowsProcessed = @BatchSize
WAITFOR DELAY '00:00:01'; -- 1 second pause
END;
-- Alternative pattern for updates
/*
DECLARE @BatchSize INT = 5000;
DECLARE @RowsProcessed INT = 1;
WHILE @RowsProcessed > 0
BEGIN
UPDATE TOP (@BatchSize) YourTable
SET ProcessedFlag = 1
WHERE ProcessedFlag = 0;
SET @RowsProcessed = @@ROWCOUNT;
-- Allow transaction log to flush
IF @RowsProcessed = @BatchSize
WAITFOR DELAY '00:00:02';
END;
*/
AutoDBA generates fix scripts like these automatically, with impact estimates and rollback SQL included.
Prevention
Place transaction log files on dedicated fast storage separate from data files, preferably NVMe SSDs with low latency and high sustained write throughput. Configure write caching appropriately with battery-backed cache controllers to ensure data safety while maximizing performance.
Pre-size transaction log files to accommodate peak workload demands, eliminating auto-growth events during production hours. Set reasonable growth increments (fixed MB rather than percentages) to prevent large blocking growth operations.
Implement frequent transaction log backups in FULL recovery model databases to prevent excessive log growth. Backup frequency should align with your Recovery Point Objective, typically every 5 to 15 minutes for OLTP systems.
Monitor transaction log usage patterns and implement proper maintenance windows for index rebuilds and large data modifications. Batch large operations into smaller transactions to reduce individual log flush sizes and duration.
Configure appropriate backup compression and backup to fast storage to minimize backup operation impact on log disk subsystem. Consider backup striping for very large databases to parallelize backup I/O.
Establish baseline performance metrics for log write latency and throughput using Performance Monitor counters (LogicalDisk: Avg Disk sec/Write, SQLServer:Databases - Log Flushes/sec). Alert on deviations from normal patterns before they impact users.
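One lightweight way to keep such a baseline inside SQL Server is a small capture table populated on a schedule; latency over an interval is then the difference between consecutive rows. This is a sketch, and dbo.LogLatencyBaseline is a hypothetical name:

```sql
-- Hypothetical baseline table for log write latency; schedule the
-- INSERT periodically (e.g. via an Agent job) and diff consecutive rows
CREATE TABLE dbo.LogLatencyBaseline (
capture_time DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
database_name SYSNAME NOT NULL,
io_stall_write_ms BIGINT NOT NULL,
num_of_writes BIGINT NOT NULL
);

INSERT INTO dbo.LogLatencyBaseline (database_name, io_stall_write_ms, num_of_writes)
SELECT DB_NAME(vfs.database_id), vfs.io_stall_write_ms, vfs.num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
INNER JOIN sys.master_files mf
ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE mf.type = 1; -- log files only
```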
For high-throughput OLTP systems, consider delayed durability for specific workloads where slight data loss risk is acceptable in exchange for significant performance gains. This reduces WRITELOG waits by batching log flushes asynchronously.
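Delayed durability is configured at the database level; with ALLOWED, individual transactions opt in at commit time, while FORCED applies it database-wide. Committed work can be lost on a crash or failover, so apply it only where that risk is acceptable:

```sql
-- Permit delayed durability per transaction (DISABLED restores
-- full durability; FORCED applies it to all transactions)
ALTER DATABASE [YourDatabase] SET DELAYED_DURABILITY = ALLOWED;

-- Opt in for a specific transaction at commit time
BEGIN TRANSACTION;
-- ... work whose commit latency matters more than durability ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```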
Use a single transaction log file per database, since SQL Server writes to log files sequentially and additional log files do not improve performance. Focus instead on placing the single log file on the fastest available storage. If using SAN storage, ensure adequate queue depth and multipathing configuration to maximize I/O performance to the log device.
Need hands-on help?
Dealing with persistent WRITELOG issues across your environment? Samix Technology provides hands-on SQL Server performance consulting with 15+ years of production DBA experience.