Why Autoshrink is so bad

One of the interesting things about Microsoft SQL Server 2008 is that you can change a lot of properties at the server level and the database level, so you can configure the server the way you want.

One of these properties is the autoshrink property, and you find it when you right-click a database and choose Options.

As you can see, it is the third option on the property page, and it is set to False. If I want autoshrink, I can set it to True.

Setting it to True can start a vicious cycle where your database autoshrinks, then expands, then autoshrinks again, and so on, and we don't want that.
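If you find it enabled and want to turn it off without going through the GUI, a simple ALTER DATABASE does it (shown here against a hypothetical database named MyDatabase; substitute your own database name):

— CODE START —

ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF
go

— CODE END —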

But what is the biggest problem with autoshrink? It generates fragmentation, and that degrades performance dramatically.

Why? Let's look at an example and let me explain. Imagine that a database has two tables, T1 and T2, and they both require 7 data pages to store their data. When we have created them, it looks like this:

You read the data file from left to right, and that means if I want to scan table T2, I start at T2-1 and end at T2-7. It is easy for SQL Server to read (please forget all about extents, heaps, indexes etc. This is just to get an understanding of what is happening). So I first read T2-1, then T2-2, and I end at T2-7.

But now I want to delete table T1, so I drop it, the data pages for T1 are released, and my data file looks like this:

I have 7 empty slots, and now we shrink the database. What would be nice is if the shrink ended up like this:

BUT it does not. It ends up like this:

It turns the table around, because SQL Server is so clever (that was ironic) that it starts from the end of the data file with the last page of T2 and puts it in the first free slot, then it takes the second-to-last page (T2-6) and moves that into the second position, and so on.

That causes a dramatic increase in fragmentation of the table.

Let's see that. First we set up the database:

— CODE START —

use master
go

if DATABASEPROPERTYEX('TestAutoShrink','Version')>0
    drop database TestAutoShrink
go

— REMEMBER TO ADD A FILELOCATION THAT IS VALID ON YOUR COMPUTER!!!!!

CREATE DATABASE [TestAutoShrink] ON PRIMARY
( NAME = N'TestAutoShrink', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestAutoShrink.mdf' , SIZE = 262144KB , FILEGROWTH = 1024KB )
 LOG ON
( NAME = N'TestAutoShrink_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestAutoShrink_log.ldf' , SIZE = 262144KB , FILEGROWTH = 10%)
go

use TestAutoshrink
go

— CODE END —

Then we add two tables and fill them with data:

— CODE START —

use TestAutoshrink
go

Create table Table1 (ID      int identity primary key,
                     Content varchar(200)
                    )

Create table Table2 (ID      int identity primary key,
                     Content varchar(200)
                    )
go

insert into Table1 Values('THIS IS A DEMONSTRATION OF AUTOSHRINK')
go 10000

insert into Table2 Values('THIS IS A DEMONSTRATION OF AUTOSHRINK')
go 10000

— CODE END —

Note the go 10000. It means that the batch runs 10,000 times. It is a nice and easy trick that Paul Randal showed me at the Immersion class in Seattle.

Let's see some statistics. First we want to see the fragmentation and how many pages the tables use.

— CODE START —

select t1.object_id, t2.name, avg_fragmentation_in_percent as fragmentation,
       page_count
from sys.dm_db_index_physical_stats(DB_ID(),null,null,null,'DETAILED') t1 inner join
       sys.objects t2 on t1.object_id=t2.object_id
where index_level=0

— CODE END —

So, what I can see is a fragmentation of approx. 5% at each table's leaf level (you may get a different figure, but it will at least be a low figure), and I can see that each table uses 68 pages.

Now, if you want, you can see more info about these pages by running DBCC IND. If I want to see DBCC IND for Table1, I can do it like this:

— CODE START —

dbcc ind ('testautoshrink','Table1',-1)

— CODE END —

The -1 can be replaced with e.g. -2 or the number of the index that you want to see.

What you can see is that it returns 70 rows, and the reason is that in addition to the 68 data pages there is an Index Allocation Map page (IAM page) and an index page:

IAM pages have a PageType of 10, data pages a PageType of 1, and index pages a PageType of 2.

NOW. Let's delete a table.

— CODE START —

Drop table Table1

— CODE END —

If you try to run DBCC IND for Table1 again, you will get an error, because the table no longer exists.

OK, now let's do the shrink:

— CODE START —

USE [TestAutoShrink]
GO
DBCC SHRINKFILE (N'TestAutoShrink' , 2)
GO

— CODE END —

And now for the interesting part: what happened to our data file and our table?

The data file is easy. It has been minimized in size and is now 2 MB. But if we run the code from before and look at the stats for Table2, we end up with some very interesting figures.

— CODE START —

select t1.object_id, t2.name, avg_fragmentation_in_percent as fragmentation,
       page_count
from sys.dm_db_index_physical_stats(DB_ID(),null,null,null,'DETAILED') t1 inner join
       sys.objects t2 on t1.object_id=t2.object_id
where index_level=0

— CODE END —

Let's see the output.

WOW!!! What happened to the fragmentation? It increased dramatically, and now we need to do an index rebuild to reduce it.
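A rebuild of the indexes on Table2 puts the pages back in contiguous order (this is a standard ALTER INDEX rebuild; for lighter-weight defragmentation you could use ALTER INDEX ... REORGANIZE instead):

— CODE START —

use TestAutoshrink
go

-- Rebuild all indexes on Table2, then rerun the fragmentation query above
ALTER INDEX ALL ON Table2 REBUILD
go

— CODE END —

Note that the rebuild needs free space to build the new copy of the index, so it will grow the data file again. If autoshrink is on, the database then shrinks again, and you are back in the shrink-grow cycle from the beginning of this post.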

To those of you who do not know about fragmentation, here it is in short: IT IS BAD!!! Of course you can find situations where you can argue that it is not bad, but in general: IT IS BAD!!!

CONCLUSION: Unless you have a completely different data flow in your database (e.g. if you drop a lot of data), you should not shrink a database, and of course you should not autoshrink either!
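If you want to check whether autoshrink has sneaked in on any of your databases, sys.databases exposes an is_auto_shrink_on column that you can query:

— CODE START —

-- List all databases that currently have autoshrink enabled
select name
from sys.databases
where is_auto_shrink_on = 1

— CODE END —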

Søren
