Quick Answer
SQL Server Error 1105 indicates that a filegroup has exhausted all available space and cannot allocate additional extents for data or index operations. This occurs when every data file in the filegroup has reached its maximum size limit, auto-growth is disabled, or the disk volumes hosting the files are full. This is a critical error that halts write operations to the affected database.
Root Cause Analysis
Error 1105 originates from SQL Server's space allocation manager when the engine attempts to allocate a new extent (8 contiguous 8KB pages) but finds no available space in any data file within the target filegroup. The allocation process follows this sequence: SQL Server first attempts to allocate from existing files with available space, then tries to auto-grow files if configured, and finally returns error 1105 if all allocation attempts fail.
The space allocation manager maintains internal structures tracking available space in each file through Global Allocation Map (GAM) and Shared Global Allocation Map (SGAM) pages. When these indicate no free extents and auto-growth fails due to maximum file size limits or insufficient disk space, the allocation request fails immediately.
The error 1105 message text names both the database and the full filegroup, so the affected filegroup can be identified directly from the error itself. The underlying allocation mechanism is consistent across versions. For more granular monitoring, sys.dm_db_file_space_usage reports per-file space utilization; it originally covered only tempdb and was extended to all databases in SQL Server 2012.
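As a quick illustration, a minimal query against sys.dm_db_file_space_usage (run in the context of the affected database) shows allocated versus unallocated extent pages per file:

-- Per-file extent allocation in the current database
-- (available for user databases in SQL Server 2012 and later)
SELECT
file_id,
filegroup_id,
total_page_count * 8 / 1024 AS TotalMB,
allocated_extent_page_count * 8 / 1024 AS AllocatedMB,
unallocated_extent_page_count * 8 / 1024 AS UnallocatedMB
FROM sys.dm_db_file_space_usage;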
Auto-growth failures manifest differently depending on configuration. If auto-growth is disabled, the error occurs immediately when existing space is exhausted. With auto-growth enabled but constrained by maximum file size or disk space, SQL Server attempts the growth operation before failing with 1105.
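To confirm whether growth attempts preceded the error, recent auto-growth events can be pulled from the default trace. This is a sketch that assumes the default trace is still enabled on the instance (it is on by default):

-- Recent data file auto-growth events from the default trace
-- EventClass 92 = Data File Auto Grow
DECLARE @TracePath NVARCHAR(260);
SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

SELECT
DatabaseName,
FileName,
StartTime,
Duration / 1000 AS DurationMs, -- Duration is reported in microseconds
IntegerData * 8 / 1024 AS GrowthMB -- IntegerData is the number of pages grown
FROM sys.fn_trace_gettable(@TracePath, DEFAULT)
WHERE EventClass = 92
ORDER BY StartTime DESC;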
AutoDBA checks database file sizing, auto-growth configuration, and space utilization monitoring across your entire SQL Server instance in 60 seconds. Download the free diagnostic script and see what else needs attention.
Diagnostic Queries
-- Check current space utilization for all filegroups
SELECT
DB_NAME() AS DatabaseName,
fg.name AS FileGroupName,
f.name AS FileName,
f.physical_name,
CAST(f.size * 8.0 / 1024 AS DECIMAL(10,2)) AS FileSizeMB,
CAST(f.max_size * 8.0 / 1024 AS DECIMAL(10,2)) AS MaxSizeMB,
CAST(FILEPROPERTY(f.name, 'SpaceUsed') * 8.0 / 1024 AS DECIMAL(10,2)) AS SpaceUsedMB,
CAST((f.size - FILEPROPERTY(f.name, 'SpaceUsed')) * 8.0 / 1024 AS DECIMAL(10,2)) AS FreeSpaceMB,
f.growth,
f.is_percent_growth
FROM sys.database_files f
INNER JOIN sys.filegroups fg ON f.data_space_id = fg.data_space_id
WHERE f.type_desc = 'ROWS';
-- Identify filegroups approaching capacity limits
SELECT
fg.name AS FileGroupName,
COUNT(f.file_id) AS FileCount,
SUM(CAST(f.size * 8.0 / 1024 AS DECIMAL(10,2))) AS TotalSizeMB,
SUM(CAST(FILEPROPERTY(f.name, 'SpaceUsed') * 8.0 / 1024 AS DECIMAL(10,2))) AS TotalUsedMB,
CAST(100.0 * SUM(FILEPROPERTY(f.name, 'SpaceUsed')) / SUM(f.size) AS DECIMAL(5,2)) AS PercentUsed
FROM sys.database_files f
INNER JOIN sys.filegroups fg ON f.data_space_id = fg.data_space_id
WHERE f.type_desc = 'ROWS'
GROUP BY fg.name
HAVING CAST(100.0 * SUM(FILEPROPERTY(f.name, 'SpaceUsed')) / SUM(f.size) AS DECIMAL(5,2)) > 85;
-- Check disk space on volumes hosting database files
SELECT DISTINCT
volume_mount_point,
file_system_type,
logical_volume_name,
CAST(total_bytes / 1024.0 / 1024 / 1024 AS DECIMAL(10,2)) AS TotalGB,
CAST(available_bytes / 1024.0 / 1024 / 1024 AS DECIMAL(10,2)) AS AvailableGB,
CAST(100.0 * available_bytes / total_bytes AS DECIMAL(5,2)) AS PercentFree
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id)
WHERE mf.database_id = DB_ID();
-- Review auto-growth configuration and recent growth events
SELECT
f.name AS FileName,
f.growth AS GrowthSetting,
f.is_percent_growth,
f.max_size,
CASE f.max_size
WHEN -1 THEN 'Unlimited'
ELSE CAST(f.max_size * 8.0 / 1024 AS VARCHAR(20)) + ' MB'
END AS MaxSizeDescription
FROM sys.database_files f
WHERE f.type_desc = 'ROWS'
AND f.data_space_id IN (
SELECT fg.data_space_id
FROM sys.filegroups fg
WHERE fg.name = 'PRIMARY' -- Replace with the affected filegroup name
);
-- Find largest tables consuming space in affected filegroup
SELECT TOP 10
SCHEMA_NAME(t.schema_id) AS SchemaName,
t.name AS TableName,
i.name AS IndexName,
p.partition_number,
fg.name AS FileGroupName,
SUM(a.total_pages) * 8 / 1024 AS TotalSpaceMB,
SUM(a.used_pages) * 8 / 1024 AS UsedSpaceMB,
SUM(a.data_pages) * 8 / 1024 AS DataSpaceMB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
INNER JOIN sys.filegroups fg ON a.data_space_id = fg.data_space_id
WHERE a.type IN (1, 3) -- IN_ROW_DATA and ROW_OVERFLOW_DATA
GROUP BY t.schema_id, t.name, i.name, p.partition_number, fg.name
ORDER BY SUM(a.total_pages) DESC;
Fix Scripts
Add new data file to filegroup (preferred solution)
-- Add additional data file with proper sizing and growth settings
-- Replace paths and sizing based on your environment
USE [master];
GO
ALTER DATABASE [YourDatabaseName]
ADD FILE (
NAME = 'YourDB_Data2',
FILENAME = 'C:\Data\YourDB_Data2.ndf', -- Secondary data files conventionally use .ndf; use a drive with sufficient space
SIZE = 1024MB, -- Start with reasonable initial size
MAXSIZE = 10240MB, -- Set appropriate maximum
FILEGROWTH = 256MB -- Use fixed growth, not percentage
)
TO FILEGROUP [PRIMARY]; -- Replace with target filegroup name
GO
Extend existing data file maximum size
-- Increase maximum size of existing data file
-- CAUTION: Ensure sufficient disk space exists before execution
USE [master];
GO
ALTER DATABASE [YourDatabaseName]
MODIFY FILE (
NAME = 'YourDB_Data', -- Use actual logical file name
MAXSIZE = 20480MB -- Increase maximum size appropriately
);
GO
Emergency space reclamation through index maintenance
-- Reclaim space through page compression of the largest tables
-- TEST IN DEV FIRST: compression rebuilds are resource-intensive and need
-- free workspace roughly the size of the table being rebuilt, so this may
-- not be viable when the filegroup is already completely full
USE [YourDatabaseName];
GO
-- Generate page compression statements for the largest tables
-- (page compression requires Enterprise edition before SQL Server 2016 SP1)
SELECT
'ALTER TABLE [' + SCHEMA_NAME(t.schema_id) + '].[' + t.name + '] REBUILD WITH (DATA_COMPRESSION = PAGE);' AS CompressSQL
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE i.index_id IN (0, 1) -- Heap or clustered index
AND p.data_compression = 0 -- Not already compressed
GROUP BY t.schema_id, t.name
HAVING SUM(a.total_pages) > 12800 -- Tables larger than 100MB
ORDER BY SUM(a.total_pages) DESC;
-- Execute the generated statements manually after review
Archive old data to reclaim space
-- Identify candidates for data archival based on date patterns
-- Review before implementing archival strategy
USE [YourDatabaseName];
GO
WITH TableSizes AS (
SELECT
SCHEMA_NAME(t.schema_id) AS SchemaName,
t.name AS TableName,
SUM(a.total_pages) * 8 / 1024 AS SizeMB,
t.object_id
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE i.index_id IN (0, 1)
GROUP BY t.schema_id, t.name, t.object_id
HAVING SUM(a.total_pages) * 8 / 1024 > 100 -- Tables larger than 100MB
)
SELECT
ts.SchemaName,
ts.TableName,
ts.SizeMB,
c.name AS PossibleDateColumn
FROM TableSizes ts
INNER JOIN sys.columns c ON ts.object_id = c.object_id
WHERE c.system_type_id IN (40, 41, 42, 43, 58, 61) -- date, time, datetime2, datetimeoffset, smalldatetime, datetime
ORDER BY ts.SizeMB DESC;
AutoDBA generates fix scripts like these automatically, with impact estimates and rollback SQL included.
Prevention
Configure proactive space monitoring using SQL Server Agent alerts and scheduled scripts that check file space utilization. Note that the "SQLServer:Databases - Percent Log Used" performance counter tracks transaction log usage, not data file space, so data-file monitoring requires custom queries against the catalog views shown above. Set alert thresholds at 80% and 90% capacity to provide adequate response time before reaching critical levels.
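An instance-level alert on error 1105 itself gives immediate notification whenever an allocation fails. A minimal sketch using msdb.dbo.sp_add_alert follows; the alert name is a placeholder, and in practice you would pair it with an operator via sp_add_notification:

-- Fire an alert whenever error 1105 is raised anywhere on the instance
EXEC msdb.dbo.sp_add_alert
@name = N'Filegroup full (error 1105)', -- Placeholder name
@message_id = 1105,
@severity = 0, -- Must be 0 when @message_id is specified
@enabled = 1,
@delay_between_responses = 300, -- Seconds; throttles repeated firings
@include_event_description_in = 1; -- Include the message text in email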
Implement standardized auto-growth settings using fixed MB values rather than percentage growth to ensure predictable space allocation patterns. Percentage-based growth becomes increasingly slow and resource-intensive as files grow larger. Configure initial file sizes appropriately to minimize auto-growth events during normal operations.
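Converting a file from percentage to fixed growth is a single MODIFY FILE operation; a sketch with placeholder database and file names:

-- Switch a data file to a fixed growth increment
USE [master];
GO
ALTER DATABASE [YourDatabaseName]
MODIFY FILE (
NAME = 'YourDB_Data', -- Use the actual logical file name
FILEGROWTH = 256MB -- Fixed increment instead of percentage
);
GO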
Establish multiple data files per filegroup on separate physical drives when possible, sized equally to optimize parallel allocation performance. SQL Server's proportional fill algorithm distributes writes across files based on available space, improving both performance and space utilization.
Deploy automated space management scripts that add new files or extend existing files when utilization thresholds are exceeded. Include capacity planning reports that project growth trends based on historical patterns and business requirements.
Configure database mail notifications for space-related events and integrate with monitoring systems like SCOM or third-party solutions. Create custom extended events sessions to capture allocation failures and space pressure events for detailed analysis.
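Allocation failures can be captured at the source with an Extended Events session filtered on error number 1105. The following is a sketch; the session name and file target path are placeholders:

-- Capture every occurrence of error 1105 with context
CREATE EVENT SESSION [Track_Error_1105] ON SERVER
ADD EVENT sqlserver.error_reported (
ACTION (sqlserver.database_name, sqlserver.client_hostname, sqlserver.sql_text)
WHERE error_number = 1105
)
ADD TARGET package0.event_file (SET filename = N'Track_Error_1105.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [Track_Error_1105] ON SERVER STATE = START;
GO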
Review and optimize data retention policies regularly, implementing table partitioning for large historical datasets where appropriate. Consider archive strategies using readable secondaries or separate archive databases to maintain operational data at manageable sizes.
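For date-based retention, a monthly partition function and scheme might look like the following sketch. Boundary values and filegroup placement are illustrative; real implementations typically map older partitions to a separate archive filegroup rather than ALL TO ([PRIMARY]):

-- Monthly RANGE RIGHT partitioning on a date column (illustrative boundaries)
CREATE PARTITION FUNCTION pf_MonthlyByDate (datetime2)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');
GO
CREATE PARTITION SCHEME ps_MonthlyByDate
AS PARTITION pf_MonthlyByDate ALL TO ([PRIMARY]);
GO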
Need hands-on help?
Dealing with persistent SQL Server error 1105 issues across your environment? Samix Technology provides hands-on SQL Server performance consulting with 15+ years of production DBA experience.