To be honest, this is not something I ever thought I needed to know. Until one day, I found a database where transactions were failing with ‘PRIMARY’ filegroup is full errors. So, I did what any DBA would do and checked whether auto-growth was enabled. It was. So, why was the file no longer growing?
The answer wasn’t immediately clear, but I had a good idea what the issue was.
If we look in ‘sys.master_files’, we can see that ‘size’ is the number of 8k pages in the data file, and it’s stored as an INT.
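To see this for yourself, a query along these lines will do (the database name is hypothetical, and the CONVERT to BIGINT is just there to keep the multiplication from overflowing INT):

SELECT name,
       size AS size_in_8k_pages,
       CONVERT(BIGINT, size) * 8 / 1024 / 1024 AS size_in_gb
FROM sys.master_files
WHERE database_id = DB_ID(N'YourDatabase')  -- hypothetical database name
  AND type_desc = 'ROWS';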
If we do the math, that gives us a maximum of 2,147,483,648 8k pages per data file (strictly speaking, an INT tops out at 2,147,483,647, but 2^31 keeps the math clean). That’s a lot of pages, but how much space would that be on disk?
Let’s do a little more math. 2,147,483,648 x 8 equals 17,179,869,184 KB. If we break that down further, that’s 16,777,216 MB, or 16,384 GB, which is 16TB. So, there’s your answer: a data file cannot grow past 16TB.
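If you want to know how close an existing file is to that ceiling, a sketch like this works (the 2,147,483,648 literal is just the 2^31-page limit from the math above):

SELECT name,
       CONVERT(BIGINT, size) * 8 / 1024 / 1024 AS size_in_gb,
       CONVERT(DECIMAL(5,2), 100.0 * size / 2147483648.0) AS pct_of_16tb_limit
FROM sys.master_files
WHERE type_desc = 'ROWS';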
Any ideas how big my data file was? You guessed it… 16TB!
That’s a lot of space, and hopefully you don’t have to manage a data file that size. But be aware that if you have a data file nearing that limit and it’s the only data file configured, new writes to the database will fail once the file hits this size. If you’re in this scenario, you’ll want to go ahead and add an additional data file before you hit the limit.
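As a sketch, adding a second file to the PRIMARY filegroup looks something like this (the logical name, path, size, and growth settings here are all hypothetical; use values that fit your environment):

ALTER DATABASE YourDatabase
ADD FILE
(
    NAME = N'YourDatabase_Data2',                 -- hypothetical logical file name
    FILENAME = N'D:\Data\YourDatabase_Data2.ndf', -- hypothetical path
    SIZE = 100GB,
    FILEGROWTH = 10GB
)
TO FILEGROUP [PRIMARY];

Once the second file is in place, SQL Server’s proportional fill will spread new allocations across the files in the filegroup.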
This scenario can be even more confusing if FILL FACTOR is < 100 across all (or most) tables and indexes. There's still some free space in the data file, but it can't grow any larger. Lots of page splits ensue, and performance plummets.
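(For anyone who wants to check, a quick sketch: this lists indexes with an explicit fill factor below 100. A fill_factor of 0 means the instance default, which is effectively 100 unless it has been changed.)

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       i.fill_factor
FROM sys.indexes AS i
WHERE i.fill_factor > 0
  AND i.fill_factor < 100;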
Great post. Thanks for sharing!
Great point. I don’t think most people understand the impact of lowering fill factor. In this case, a fill factor of 80% would basically equate to 3.2TB of wasted file and disk space.