Thursday, November 13, 2008

Filtered indexes in SQL Server 2008

OK, a great feature in itself... and a great performance booster when applied correctly. I'm not gonna repeat what others have written; google it!

But what I really like about it is that it solves the age-old problem of applying a uniqueness requirement to all but a subset of rows (and usually that subset is the null values or something similar).

The classic solution to this problem is of course to use a trigger to verify uniqueness, and to make that trigger run fast it needs an index. Instead of the mess of a complete index (larger than we need) plus extra trigger code that adds complexity and steals performance, we simply apply a filtered index.

Example, we want to make sure that socialid is unique if it is not null:
create table Person (id int identity(1,1) primary key, name nvarchar(100), socialid nvarchar(20))

insert into Person (name, socialid) values ('Albert', '1234'), ('Bengt', null), ('Christian', '2341'), ('Dan', null)

create unique index idx_socialid on Person (socialid) where socialid is not null

select * from Person

insert into Person (name, socialid) values ('Eric', '1234')

Inserting the person named Eric will fail since the socialid is not unique.

Tuesday, June 24, 2008

StatementCompare

Since I often optimize procedures or make other changes, it is nice to be able to verify that a stored procedure returns the same data as another version of it. So I wrote a quick hack to make that comparison.

It checks select statements or multiple result sets from stored procedures, and verifies the order of tuples if the checkbox option for that is checked. It doesn't verify return values or row counts from insert/update/delete.

Can be found here:
http://albert.nu/programs/statementcompare

Thursday, May 29, 2008

When to optimize

When getting an optimization task from someone I always try to start as wide as possible. What type of machine are you running on? What does Task Manager show? And so on. Then I quickly start a trace to see what is actually happening and view some DMVs to spot issues.

I've found that quite often the trace shows statements or procedures that by themselves eat up tons of resources. If you find a top five of tasks that use in the range of 30% or more of CPU or IO resources, then optimize! Even if these tasks aren't causing locks or other problems, the improvement will be significant enough that your client will notice.
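
A minimal sketch of the kind of DMV query I mean, assuming SQL Server 2005 or later (note that sys.dm_exec_query_stats only sees plans still in cache):

--Top five statements by total CPU since their plans were cached;
--order by total_logical_reads instead when hunting IO
select top 5
    qs.total_worker_time / 1000 as total_cpu_ms,
    qs.total_logical_reads,
    qs.execution_count,
    substring(st.text, qs.statement_start_offset / 2 + 1,
        (case qs.statement_end_offset
            when -1 then datalength(st.text)
            else qs.statement_end_offset
         end - qs.statement_start_offset) / 2 + 1) as statement_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.total_worker_time desc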

As a side effect the entire database will run much smoother afterwards, and with less load the risk of locks etc. will decrease. These tasks are often easy to optimize since you will almost always benefit from trading insert/update/delete cost for select cost by adding or removing indexes.

Thursday, May 22, 2008

Performance monitor doesn't lie!

Yesterday I worked on optimizing a database for a client... I had a pretty nice optimization of a stored proceedure in place... that's what I thought at least. Using a computed column with an index I could do direct lookups instead of searching large amounts of text data and when testing the procedure in management studio the execution plan was perfect and IO was reduced from 50k to less than 50 :). But server load didn't drop as expected so I fired up performance monitor and started a trace... guess what the procedure was still running with poor performance.
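
For reference, the kind of setup I mean looks roughly like this (a minimal sketch; the table and column names are made up):

--Hypothetical table with a large text column that used to be scanned
create table dbo.Document (id int identity(1,1) primary key, Body nvarchar(max))

--Computed column holding just the part of Body that lookups need;
--the outer convert keeps the column indexable (not a max type)
alter table dbo.Document
    add SearchKey as convert(nvarchar(50), left(Body, 50)) persisted

create index idx_Document_SearchKey on dbo.Document (SearchKey)

--The lookup becomes an index seek instead of a scan of Body
select id from dbo.Document where SearchKey = N'invoice 4711'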

After some tinkering it was obvious that the application code failed to get the correct execution plan, for reasons still unknown (maybe access rights or some weird ANSI option related to the computed column). I did a simple workaround and it worked fine.

Lesson learned: there is a difference between a successful optimization when running the procedure in Management Studio and actually applying it to a running application.

When in doubt trust performance monitor.

Thursday, May 15, 2008

Drop a database

The drop syntax is rarely useful on its own since there is always someone using the database... and I always seem to forget the lines needed to set the database to single-user mode.

This works:

ALTER DATABASE xxx SET SINGLE_USER WITH ROLLBACK IMMEDIATE

USE master

DROP DATABASE xxx

Really basic stuff...

Remember that:
@Variable like Column
Is not the same as:
Column like @Variable

Really basic, but sometimes even basic stuff can cause a mess.
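
A quick self-contained illustration of the difference (with made-up values):

declare @Variable nvarchar(10)
set @Variable = 'abc%'

--Column like @Variable: rows whose Col value matches the pattern 'abc%'
--Returns 'abcdef'
select * from (values ('abcdef'), ('a%')) as t(Col)
where t.Col like @Variable

--@Variable like Column: each Col value is used as the pattern instead,
--so 'a%' matches the string 'abc%' while 'abcdef' does not
--Returns 'a%'
select * from (values ('abcdef'), ('a%')) as t(Col)
where @Variable like t.Col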

Friday, April 4, 2008

Comma separated string of values...

/*
Note that if the values that are supposed to be comma separated can contain
commas, it is a good idea to replace those with something else before
generating the list
*/

declare @list nvarchar(max)
set @list = ''
select @list = @list + ',' + replace(ColumnName,',','.') from TableName
set @list = substring( @list , 2, 10000000 )
select @list

--Simpler version when handling ids
--Quite quick
declare @list varchar(max)
set @list = ''
select @list = @list + ',' + convert(varchar(10), ColumnName) from TableName
set @list = substring( @list , 2, 10000000 )
select @list

--Much quicker version (notable only when generating very long lists)
declare @list varchar(max)
set @list = (select ',' + convert(varchar(10), ColumnName) from TableName for xml path(''))
set @list = substring(@list , 2, 10000000 )
select @list
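
To try the snippets above against something concrete, a hypothetical test table could look like this:

--Hypothetical test data for the snippets above
create table TableName (ColumnName varchar(10))
insert into TableName (ColumnName) values ('1'), ('2'), ('3')
--Each of the three snippets should now return: 1,2,3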

Monday, January 14, 2008

Reseeding those identity fields...

DBCC CHECKIDENT ( table_name [, { RESEED | NORESEED } [, new_seed_value ] ] )

DBCC CHECKIDENT ( 'dbo.tTable', RESEED, 14000000)
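
A minimal demo of what the reseed does (hypothetical table; when the table already contains rows, the next inserted row gets the new seed plus the increment):

--Hypothetical demo table
create table dbo.tTable (id int identity(1,1) primary key, name nvarchar(50))
insert into dbo.tTable (name) values ('first')   -- id = 1

DBCC CHECKIDENT ('dbo.tTable', RESEED, 14000000)

insert into dbo.tTable (name) values ('second')  -- id = 14000001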