Wednesday, January 21, 2009

Use SQL Server trace to find problems on other servers!

The trace tool in SQL Server is mostly used to find performance problems in SQL Server itself. But it is also a great tool for finding performance problems in application servers and web servers.

The general way of thinking is this: even if your SQL Server is doing fine, what impact does the information flow have on other parts of the system? It is not unusual for the SQL Server to be the most powerful machine in the system, allowing it to cope with huge loads of requests.

If you see either large numbers of requests to SQL Server, or fewer requests but with large amounts of data, try to figure out how the application manages this information. The life of an application server is not that simple; by knowing the SQL Server requests you can figure out what it is doing and maybe make life easier for both the application server and SQL Server.
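As a quick first look (my own sketch, not from the original post), the plan cache already shows how chatty the application is without even starting a trace; these are the standard DMVs available since SQL Server 2005:

-- Most frequently executed statements: a very high execution count
-- often means the application makes many small round trips.
select top 10 qs.execution_count, qs.total_logical_reads,
    substring(st.text, qs.statement_start_offset / 2 + 1,
        (case qs.statement_end_offset when -1 then datalength(st.text)
         else qs.statement_end_offset end - qs.statement_start_offset) / 2 + 1) as statement_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.execution_count desc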

Thursday, November 13, 2008

Filtered indexes in SQL Server 2008

OK, a great feature as such... a great performance booster when applied correctly. I'm not going to repeat what others have written, google it!

But what I really like about it is that it solves the age-old problem of applying a uniqueness requirement to all but a subset of rows (and usually that subset is the null values or something similar).

The classic solution to this problem is of course to use a trigger to verify uniqueness, and to make that trigger run fast it needs an index. Instead of that mess of a complete index (larger than we need) plus extra code in the form of a trigger, which adds complexity and steals more performance, we simply apply a filtered index.

Example: we want to make sure that socialid is unique whenever it is not null:
create table Person (id int identity(1,1) primary key, name nvarchar(100), socialid nvarchar(20))

insert into Person (name, socialid) values ('Albert', '1234'), ('Bengt', null), ('Christian', '2341'), ('Dan', null)

create unique index idx_socialid on Person (socialid) where socialid is not null

select * from Person

insert into Person (name, socialid) values ('Eric', '1234')

Inserting the person named Eric will fail since the socialid '1234' already exists in the index.
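To see the filter at work (this extra insert is my addition, with Fredrik as a made-up name), another null socialid still goes through, since null rows are excluded from the index:

-- Succeeds: rows with socialid = null are outside the filtered index,
-- so any number of them may exist.
insert into Person (name, socialid) values ('Fredrik', null)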

Tuesday, June 24, 2008

StatementCompare

Since I often optimize procedures or make other changes, it is nice to be able to verify that a stored procedure returns the same data as another version of it. So I wrote a quick hack to make that comparison.

It checks select statements or multiple resultsets from stored procedures, and verifies the order of tuples if the checkbox option for that is checked. It doesn't verify return values or row counts from insert/update/delete.

Can be found here:
http://albert.nu/programs/statementcompare
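For a single resultset you can get a rough version of the same check in plain T-SQL (a sketch of the idea, not how the tool works; OldResults and NewResults are hypothetical tables holding the captured output of each version, for example via insert ... exec, and note that except is set-based, so it ignores tuple order and collapses duplicates):

-- Any row returned here differs between the two versions.
(select * from OldResults except select * from NewResults)
union all
(select * from NewResults except select * from OldResults)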

Thursday, May 29, 2008

When to optimize

When getting an optimization task from someone I always try to start as wide as possible. What type of machine are you running on? What is displayed by the task manager, etc.? Then I quickly start a trace to see what is actually happening, and view some DMVs to spot issues.

I've found that quite often the trace shows statements or procedures that by themselves eat up tons of resources. If you find a top five of tasks that use in the range of 30% or more of the CPU or I/O resources, then optimize! Even if these tasks aren't causing locks or other problems, the improvement will be so significant that your client will notice.
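A quick way to get that top five without even starting a trace (my sketch using the standard DMVs; the counters cover the time since each plan entered the cache):

-- Top resource consumers in the plan cache.
select top 5 qs.total_worker_time / 1000 as total_cpu_ms,
    qs.total_logical_reads + qs.total_logical_writes as total_io,
    qs.execution_count, st.text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.total_worker_time desc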

As a side effect the entire database will run much smoother afterwards, and with less load the risk of locks etc. will decrease. These tasks are often easy to optimize since you will almost always benefit from trading insert/update/delete cost for select cost by removing or adding indexes.
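When deciding which indexes pay their way in that trade, sys.dm_db_index_usage_stats helps (again a sketch; the counters reset on server restart, so read them with that in mind):

-- Indexes that are written a lot but rarely read are removal candidates.
select object_name(s.object_id) as table_name, i.name as index_name,
    s.user_seeks + s.user_scans + s.user_lookups as reads,
    s.user_updates as writes
from sys.dm_db_index_usage_stats s
join sys.indexes i on i.object_id = s.object_id and i.index_id = s.index_id
where s.database_id = db_id()
order by s.user_updates desc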

Thursday, May 22, 2008

Performance monitor doesn't lie!

Yesterday I worked on optimizing a database for a client... I had a pretty nice optimization of a stored procedure in place... that's what I thought at least. Using a computed column with an index I could do direct lookups instead of searching large amounts of text data, and when testing the procedure in Management Studio the execution plan was perfect and IO was reduced from 50k to less than 50 :). But server load didn't drop as expected, so I fired up performance monitor and started a trace... guess what, the procedure was still running with poor performance.
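The pattern looked roughly like this (a sketch with made-up table and column names; the idea is to index a small computed value instead of searching the large text directly):

-- Hypothetical table where the text column used to be searched directly.
create table Document (
    id int identity(1,1) primary key,
    body nvarchar(max),
    bodyhash as checksum(body))  -- cheap, deterministic, indexable

create index idx_bodyhash on Document (bodyhash)

-- The index narrows the search to hash matches; the second predicate
-- filters out false positives from hash collisions.
declare @searchtext nvarchar(max)
set @searchtext = N'some exact text'
select id from Document
where bodyhash = checksum(@searchtext) and body = @searchtext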

After some tinkering it was obvious that the application code failed to get the correct execution plan, for reasons still unknown (maybe access rights or some weird ANSI option related to the computed column). I did a simple workaround and it worked out fine.
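One thing worth checking in this situation (my addition, since the post never pinned down the cause): indexes on computed columns are only used when session options like ARITHABORT and QUOTED_IDENTIFIER are on, and some client libraries set them differently than Management Studio does. The options are visible per session:

-- Compare SET options between the application's session and your own.
select session_id, program_name, arithabort, ansi_nulls, quoted_identifier
from sys.dm_exec_sessions
where is_user_process = 1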

Lesson learned: there is a difference between an optimization that succeeds when running the procedure in Management Studio and one that holds up in the running application.

When in doubt, trust performance monitor.

Thursday, May 15, 2008

Drop a database

The drop syntax is rarely useful since there is always someone using the database... and I always seem to forget the lines needed to set the database in single-user mode.

This works:

ALTER DATABASE xxx SET SINGLE_USER WITH ROLLBACK IMMEDIATE

USE master

DROP DATABASE xxx

Really basic stuff...

Remember that:
@Variable like Column
Is not the same as:
Column like @Variable
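A quick illustration (the values are mine, with the literal 'abc%' standing in for a column value): the operand on the right of like is always the pattern, so swapping sides changes the meaning completely:

declare @Variable nvarchar(10)
set @Variable = 'abcdef'

-- Column value as pattern: 'abcdef' like 'abc%' matches.
select case when @Variable like 'abc%' then 'match' else 'no match' end

-- Sides swapped: 'abc%' like 'abcdef' does not match.
select case when 'abc%' like @Variable then 'match' else 'no match' end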

Really basic, but sometimes even basic stuff can cause a mess.