Search This Blog

Friday, September 25, 2009

No connection could be made because the target machine actively refused it.

I faced this situation yesterday.
We were using Tivoli to connect to a SQL Server 2005 clustered instance to take backups.

But it was failing with the error (not printing the complete error here):
No connection could be made because the target machine actively refused it.

I asked myself: why is the server refusing the client request?
Since the connection was failing, this issue had to do with connectivity.

I started by checking a few things:

1) The port on which SQL Server is listening
2) Whether it is a named or default instance
3) The protocols enabled at the server level and at the client level

It was a default instance listening on a port other than 1433 (non-default ports are harder for attackers to find). How did we confirm that it was SQL Server using this port? By connecting to SQL Server through Management Studio using

tcp:servername,portnumber

(for a named instance, tcp:servername\instancename,portnumber), and it connected. We also ran netstat -aon (or netstat -aonb, which takes much longer because it also resolves the executable name) and confirmed that only SQL Server was using this port.
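The netstat check above can be sketched like this from a command prompt (the port 5000 and PID 1234 below are assumed examples for illustration):

```shell
:: Find who owns the port SQL Server is supposed to be listening on.
:: 5000 is an assumed example port; replace it with your instance's port.
netstat -aon | findstr :5000
:: Note the PID in the last column of the output, then map it to a process name:
tasklist /fi "pid eq 1234"
:: Or connect directly to that port with Windows authentication:
sqlcmd -S tcp:servername,5000 -E
```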

Then we checked the SQL Server client network utility (cliconfg) and found that no protocols were enabled. We enabled Named Pipes and TCP/IP (not sure how they got disabled).

We tried to connect from Tivoli again, and it failed again. Finally we forced Tivoli to connect to SQL Server on that port by creating a TCP/IP alias.
This worked and our issue was resolved.

In the past I have also found cases where the port SQL Server uses is either blocked or in use by another application. In that case, stop that application and create the alias.

Hope this helps!

Regards
Abhay

Saturday, September 12, 2009

Recreating the BUILTIN\Administrators account in SQL Server 2005

Someone at the client's site removed the BUILTIN\Administrators login from SQL Server, as it was one of the risks mentioned in the agreement. Everything was fine.

However, during maintenance SQL Server did not come up after the server reboot. They tried logging in with SA, but they had also forgotten the SA password :-) ... awesome ...
They were about to uninstall and reinstall SQL Server when we finally did the steps below. It took me some time, but one of the options worked (with my previous MS experience, of course :) ). Below are the repro steps and the solution.




Repro of the issue:

1) Delete the BUILTIN\Administrators login.
2) Test the connection through sqlcmd; you get error 18456, Level 14, State 1.
3) Assume that the SA password has been forgotten as well.


Solution:

1) Stop the SQL Server service and start it with the -m (single-user mode) parameter.
2) Go to C:\Program Files\Microsoft SQL Server\90\Tools\Binn in a command prompt.
3) Type sqlcmd -E and hit Enter. If it is a named instance, type sqlcmd -S Server\Instance -E and hit Enter.
4) You will get the 1> prompt.
5) Run the following commands:
use master
go
CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
go

6) You are done. Exit out of sqlcmd.
7) Restart the SQL Server service without the -m parameter.
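For a default instance, the whole sequence looks roughly like this from an elevated command prompt (the MSSQLSERVER service name is the default-instance assumption; adjust for a named instance):

```shell
:: Stop the service and restart it in single-user mode (-m is passed via /m).
net stop MSSQLSERVER
net start MSSQLSERVER /m
:: Connect with Windows authentication and recreate the login.
sqlcmd -E -Q "CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]"
:: Return to normal multi-user operation.
net stop MSSQLSERVER
net start MSSQLSERVER
```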

Happy learning .....

Thursday, September 10, 2009

Script to copy errors from the error log and receive them through mail

This time I was asked to create a job/stored procedure capable of sending a daily mail to the DBAs with selected errors from the error logs. Below are the steps to create a stored procedure that copies the errors from the error logs to a table and then sends the table as an attachment to your inbox. You might have to modify it as per your needs. Alternatively, you can use WMI for the SQL Server error logs; I have not tried that yet, but will post it once done.

For mailing you can also use CDO; I will post that stuff soon :)






The steps included are:

1) Creating a Database Mail account
2) Creating a Database Mail profile
3) Increasing the number of error logs from the default of 7 to 50
4) Creating a table to store the xp_readerrorlog output
5) Filtering the errors through a query
6) Using Database Mail to send the output as an attachment to your inbox
7) Finally, recycling the error log

USE [msdb]
GO

/*Step 1: Creating a database mail account */
EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'Test1',
@description = 'Test account for Database Mail',
@email_address = 'your_email.com',
@display_name = 'Test1',
@mailserver_name = 'smtp..com'

/*Step 2: Creating a database mail profile */
EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'TestProfile',
@description = 'Test Profile for database mail'

/*Step 3: Adding the database mail account to the profile created earlier */
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'TestProfile',
@account_name = 'Test1',
@sequence_number = 1

/*Step 4: Making the TestProfile we created the default public profile */
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'TestProfile',
@principal_name = 'default',
@is_default = 1 ;

/* Step 5: To increase the number of error logs to 50:

1: regedit >> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\\MSSQLServer

2: create a REG_DWORD value named 'NumErrorlogs' and set it to 50

*/
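The registry change can also be scripted. The MSSQL.1 instance key below is an assumption; check which MSSQL.n folder maps to your instance under the Instance Names key first:

```shell
:: Assumed instance key MSSQL.1; verify yours under
:: HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL first.
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer" /v NumErrorlogs /t REG_DWORD /d 50 /f
```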



/* Step 6: Create a table for keeping the error log values */

USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[errorlog](
[logdate] [date] NULL,
[processinfo] [varchar](20) NULL,
[line] [varchar](1000) NULL
) ON [PRIMARY]
GO

SET ANSI_PADDING OFF
GO







/* Step 7: Create a stored proc that:
-> Truncates the table we created above
-> Inserts the values from the error log into that table
-> Sends the mail
-> Cycles the error log
*/

Create proc [dbo].[ErrorlogEmail]
as
truncate table errorlog
insert into errorlog exec xp_readerrorlog
DECLARE @SQL varchar(2000)
DECLARE @date varchar(2000)
DECLARE @File varchar(1000)
select @date = convert(varchar(10), GETDATE(), 120)
SET @SQL = 'select * from errorlog where (line like (''%Severity%'') or line like (''%15 seconds%'') or line like (''%latch%'') or line like (''%BEGIN STACK DUMP%'') or line like (''%time-out%'')) and logdate = convert(date,GETDATE())'
SET @File = 'Errorlog report '+@date+'.csv'

EXECUTE msdb.dbo.sp_send_dbmail
@profile_name = 'TestProfile',
@recipients = 'your email.com',
@subject = 'Errorlog report',
@body = 'Attached please find the daily errorlog report',
@query = @SQL,
@attach_query_result_as_file = 1,
@query_attachment_filename = @File,
@query_result_header = 1,
@query_result_separator = ',',
@query_result_no_padding = 1,
@query_result_width = 32767

/* Step 8: Recycle the errorlog */
EXEC master.sys.sp_cycle_errorlog
print 'Error log recycled'
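Since the goal is a daily mail, the procedure can be scheduled through SQL Server Agent. A sketch (the job name, 07:00 run time, and master as the home of the procedure are assumptions):

```sql
-- Hypothetical daily schedule for dbo.ErrorlogEmail via SQL Server Agent.
EXEC msdb.dbo.sp_add_job @job_name = N'Daily errorlog mail'
EXEC msdb.dbo.sp_add_jobstep @job_name = N'Daily errorlog mail',
    @step_name = N'Run ErrorlogEmail',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.ErrorlogEmail',
    @database_name = N'master'
-- freq_type 4 = daily, freq_interval 1 = every day, 070000 = 07:00:00
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Daily errorlog mail',
    @name = N'Daily 7am',
    @freq_type = 4, @freq_interval = 1,
    @active_start_time = 70000
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Daily errorlog mail'
```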



Hope this helps ...

Happy Learning ...

Wednesday, September 2, 2009

WMI alert that can detect any alteration to a stored procedure [SQL 2005 and above]

This is a WMI alert that can detect any alteration to a stored procedure. It will tell you the date, login name, user name and object name.
In the same way, we can create an alert on anything you want. As I said in my previous post, WMI is very flexible, uses fewer resources and is much faster.


/* Step 1: creating the table to capture the Alter Proc information */

USE adventureworks
GO

IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[Alterprocevents]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
DROP TABLE [dbo].[Alterprocevents]
GO

CREATE TABLE [dbo].[Alterprocevents] (
[AlertTime] [datetime] NOT NULL ,
[Object_name] varchar(100),
[Login_Name] varchar(100),
[user_name] varchar(100),
[RecordID] [int] IDENTITY (1,1) NOT FOR REPLICATION NOT NULL,
[Flag] [int] NOT NULL CONSTRAINT [DF_AlterprocEvents_Flag] DEFAULT ((0))
) ON [PRIMARY]
GO

--CREATE INDEX [Alterproc_IDX01] ON [dbo].[Alterprocevents]([AlertTime]) WITH FILLFACTOR = 100 ON [PRIMARY]
--GO

/*Step 2 : Creating the Job that will enter values into the Alterprocevents table created above*/
/*Error handling is also added and we are running it in a transaction*/
USE [msdb]
GO

IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'Capture Alter proc event')
EXEC msdb.dbo.sp_delete_job @job_name = N'Capture Alter proc event', @delete_unused_schedule=1

GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Capture Alter proc event',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=3,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Job for responding to Capture Alter proc event events',
@category_name=N'[Uncategorized (Local)]',
@job_id = @jobId OUTPUT

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/*Step 3: Insert graph into LogEvents*/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert graph into LogEvents',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'

INSERT INTO [dbo].[Alterprocevents](
AlertTime,
object_name,
login_name,
user_name
)
VALUES (
GETDATE(),
N''$(ESCAPE_NONE(WMI(objectname)))'',
N''$(ESCAPE_NONE(WMI(loginname)))'',
N''$(ESCAPE_NONE(WMI(username)))''
)',
@database_name=N'adventureworks',
@flags=0

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION

GOTO EndSave

QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

/* Step 4: Creating the alert and associating it with the Job to be fired */

USE [msdb]
GO

IF EXISTS (SELECT name FROM msdb.dbo.sysalerts WHERE name = N'Respond to alterproc_change')
EXEC msdb.dbo.sp_delete_alert @name=N'Respond to alterproc_change'

GO

DECLARE @server_namespace varchar(255)
IF ISNULL(CHARINDEX('\', @@SERVERNAME), 0) > 0
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\' + SUBSTRING(@@SERVERNAME, ISNULL(CHARINDEX('\', @@SERVERNAME), 0) + 1, LEN(@@SERVERNAME) - ISNULL(CHARINDEX('\', @@SERVERNAME), 0))
ELSE
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER'

EXEC msdb.dbo.sp_add_alert @name=N'Respond to alterproc_change',
@wmi_namespace=@server_namespace,
@wmi_query=N'SELECT * FROM DDL_PROCEDURE_EVENTS',
@job_name='Capture Alter proc event' ;
GO
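To verify the alert end to end, a quick sketch (dbo.uspTest is an assumed example name; any procedure in the monitored database works):

```sql
-- Alter any stored procedure in the monitored database...
USE adventureworks
GO
ALTER PROCEDURE dbo.uspTest AS SELECT 1
GO
-- ...give the agent job a few seconds to fire, then check the capture table:
SELECT AlertTime, Object_name, Login_Name, [user_name]
FROM dbo.Alterprocevents
ORDER BY AlertTime DESC
```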


Hope you like it .
Regards
Abhay

WMI alert for deadlocks (no need to configure trace flag 1222): for SQL 2005 and above only

Before I proceed, I just want to say that scripting through WMI is amazing. You will take some time learning it and might get bored with the syntax, but believe me, it will catch your interest, as almost anything is possible with it.

Here is the WMI script to trace deadlocks. It is only for SQL Server 2005 and above, as I have yet to check whether SQL Server 2000 supports the WMI namespace by default.
This also includes the mailing script, which we can use for all kinds of alerts. WMI scripts are faster and consume fewer resources. I have also added error handling and kept everything important in a transaction, so that it rolls back completely on failure. This is for one database, but we can configure it for as many as needed by simply adding more alerts that use the same job.

IMPORTANT: This does not work on RTM, because of a bug where the job fails with an incorrect-syntax error (it actually parses successfully). I applied SP3 and it worked, so I am not sure whether it works on SP1 or SP2.

Part 1 :

1) Creating the table to capture the deadlock information.
2) Creating the Job that will enter values into the Deadlockevents table created above.
3) Insert graph into LogEvents.
4) Creating the alert and associating it with the Job to be fired.
5) Create a stored proc for sending the deadlock information as .CSV file through the mail.
6) Changing the flag to 1 so that next time this information is not sent.


/* Step 1: creating the table to capture the deadlock information */

USE <database_name> -- the database where the DeadlockEvents table will live
GO

IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[DeadlockEvents]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
DROP TABLE [dbo].[DeadlockEvents]
GO

CREATE TABLE [dbo].[DeadlockEvents] (
[AlertTime] [datetime] NOT NULL ,
[DeadlockGraph] [xml],
[RecordID] [int] IDENTITY (1,1) NOT FOR REPLICATION NOT NULL,
[Flag] [int] NOT NULL CONSTRAINT [DF_DeadlockEvents_Flag] DEFAULT ((0))
) ON [PRIMARY]
GO

CREATE INDEX [DeadlockEvents_IDX01] ON [dbo].[DeadlockEvents]([AlertTime]) WITH FILLFACTOR = 100 ON [PRIMARY]
GO

/*Step 2 : Creating the Job that will enter values into the Deadlockevents table created above*/
/*Service account and sql operator option are optional*/
/*Error handling is also added and we are running it in a transaction*/
USE [msdb]
GO

IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'Capture Deadlock Graph')
EXEC msdb.dbo.sp_delete_job @job_name = N'Capture Deadlock Graph', @delete_unused_schedule=1

GO
DECLARE @ServiceAccount varchar(128)
SET @ServiceAccount = N''
DECLARE @SQLOperator varchar(128)
SET @SQLOperator = N''

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Capture Deadlock Graph',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=3,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Job for responding to DEADLOCK_GRAPH events',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=@ServiceAccount,
@notify_email_operator_name=@SQLOperator, @job_id = @jobId OUTPUT

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/*Step 3: Insert graph into LogEvents*/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert graph into LogEvents',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'

INSERT INTO DeadlockEvents (
AlertTime,
DeadlockGraph
)

VALUES (
GETDATE(),
N''$(ESCAPE_NONE(WMI(TextData)))''
)',
@database_name=N'master',
@flags=0

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION

GOTO EndSave

QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

/*Creating the alert and associating it with the Job to be fired */

USE [msdb]
GO

IF EXISTS (SELECT name FROM msdb.dbo.sysalerts WHERE name = N'Respond to DEADLOCK_GRAPH')
EXEC msdb.dbo.sp_delete_alert @name=N'Respond to DEADLOCK_GRAPH'

GO

DECLARE @server_namespace varchar(255)
IF ISNULL(CHARINDEX('\', @@SERVERNAME), 0) > 0
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\' + SUBSTRING(@@SERVERNAME, ISNULL(CHARINDEX('\', @@SERVERNAME), 0) + 1, LEN(@@SERVERNAME) - ISNULL(CHARINDEX('\', @@SERVERNAME), 0))
ELSE
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER'

EXEC msdb.dbo.sp_add_alert @name=N'Respond to DEADLOCK_GRAPH',
@wmi_namespace=@server_namespace,
@wmi_query=N'SELECT * FROM DEADLOCK_GRAPH',
@job_name='Capture Deadlock Graph' ;
GO


/* Step 5: Create a stored proc for sending the deadlock information as .CSV file */

Create proc [dbo].[Deadlock_rpt]
as
DECLARE @SQL varchar(2000)
DECLARE @date varchar (2000)
DECLARE @File varchar(1000)
select @date= convert(date,GETDATE())
SET @SQL = 'select * from deadlockevents where flag = 0'
SET @File = 'Deadlock report'+@date+'.csv'

EXECUTE msdb.dbo.sp_send_dbmail
@profile_name = 'TestProfile',
@recipients = 'your email.com',
@subject = 'Deadlock report',
@body = '***URGENT*** Attached please find the deadlock report',
@query = @SQL,
@attach_query_result_as_file = 1,
@query_attachment_filename = @File,
@query_result_header = 1,
@query_result_separator = ',',
@query_result_no_padding = 1,
@query_result_width = 32767


/* Step 6: Changing the flag to 1 so that next time this information is not sent*/
update dbo.DeadlockEvents set flag = 1 where flag = 0
go


Part 2:

1) Creating the Database Mail account.
2) Creating a Database Mail profile.
3) Adding the account to the profile.
4) Making the profile the default public profile.


USE [msdb]
GO

/*Step 1:Creating a database mail account */
EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'Test1',
@description = 'Test account for Database Mail',
@email_address = 'your_email.com',
@display_name = 'Test1',
@mailserver_name = 'smtp..com'

/*Step 2:Creating a database mail profile */
EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'TestProfile',
@description = 'Test Profile for database mail'



/*Step 3: adding database mail account to database profile created earlier*/
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'TestProfile',
@account_name = 'Test1',
@sequence_number = 1

/*Step 4:To make the TestProfile we created a default public profile */
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'TestProfile',
@principal_name = 'default',
@is_default = 1 ;

How to drop all the user databases in one go

It was Saturday morning and I had a task at hand.
One of my colleagues wanted me to write a script to drop all the user databases in one go.

I was initially reluctant, as I thought DDL cannot take variables. But then I tried an approach that does not embed variables directly in the DDL.

Initially I came up with the script below:

Create table drop_db(name sysname)
Insert into drop_db select name from sysdatabases where dbid > 4
Select 'drop database ' + name from drop_db

Copy the output and execute it. You can use the same trick for other purposes as well.

But then I wanted to give it some more thought and came up with the final script.

Drop table #db_drop
Go
Create table #db_drop (name sysname, flag int identity (1,1))
Insert into #db_drop select name from sysdatabases where dbid > 4

Declare @flag1 int
Declare @string nvarchar(300)
Declare @dbname sysname
Select @flag1 = max(flag) from #db_drop

While @flag1 > 0
Begin
Select @dbname = name from #db_drop where flag = @flag1
Print @dbname + ' is being deleted'
Set @string = N'drop database ' + quotename(@dbname)
Execute sp_executesql @string
Set @flag1 = @flag1 - 1
End
Drop table #db_drop
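A shorter variant of the same idea concatenates all the DROP statements into one dynamic batch. A sketch; run it only where dropping every user database really is the goal:

```sql
-- Build one batch of DROP DATABASE statements for every user database
-- (dbid > 4 skips master, model, msdb, tempdb) and execute it.
-- QUOTENAME guards against unusual database names.
DECLARE @sql nvarchar(max)
SET @sql = N''
SELECT @sql = @sql + N'DROP DATABASE ' + QUOTENAME(name) + N'; '
FROM master.dbo.sysdatabases
WHERE dbid > 4
EXEC sp_executesql @sql
```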

Tuesday, May 19, 2009

Useful tools from Sysinternals

I was searching for a tool capable of showing the server environment details on the desktop.
Well, it is indeed BgInfo :). You will find it on the Sysinternals site.
In fact there are a lot of other useful tools that you can use, or you can simply download the complete suite.
Here you go:

http://technet.microsoft.com/en-us/sysinternals/bb897557.aspx
Enjoy (",)

Monday, May 18, 2009

How to see the SSIS packages for a named instance (2005/2008) of SQL Server

-> Go to C:\Program Files\Microsoft SQL Server\100\DTS\Binn [this is for SQL Server 2008, but the path is the same for 2005, with 90 instead of 100].
-> Find the file MsDtsSrvr.ini.xml and open it in Notepad.
-> You will see 10-12 lines of XML.
-> By default, the ServerName element contains a dot (.).
-> Remove the dot and replace it with servername\instancename.
-> Restart the Integration Services service from the Services console.
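The edit inside MsDtsSrvr.ini.xml looks like this (a sketch; MYSERVER\INST1 is a placeholder for your actual server and instance names):

```xml
<!-- Before (points Integration Services at the local default instance): -->
<ServerName>.</ServerName>
<!-- After (points it at a named instance; MYSERVER\INST1 is a placeholder): -->
<ServerName>MYSERVER\INST1</ServerName>
```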

Hope this helps .

Monday, April 27, 2009

Manually Add Full Text Search Resources For A SQL 2005 Failover Cluster

Recently, I faced this issue where the customer had not installed Full-Text Search initially and came for help later, since it was a cluster. So I thought I would share the steps here. Others are most welcome to edit or add.

-> First, check whether the FT Search service is installed (in case it was there and someone deleted the resource from Cluster Administrator). If it is not there in Services or Configuration Manager, install it through setup.

-> In Cluster Administrator, right-click 'Resources' and choose New -> Resource.

-> In the New Resource dialog, choose 'Generic Service' for the type and give it a name like SQL FULLTEXT. Any name is fine. Choose the desired group for the resource.

-> In the next screen, choose the nodes that the Full-Text resource can run on. These should be the same as for your SQL Server resource.

-> In the next screen, don't do anything. Full-Text Search doesn't need any dependencies.

-> In the Generic Service Parameters screen, type msftesql$<InstanceName> in the 'Service Name' box, where <InstanceName> is the instance name of the virtual SQL Server. You can verify this name by opening Services.msc (Start -> Run -> type 'Services.msc' -> OK), right-clicking the SQL Server FullText Search service, and choosing Properties. Use the Service Name shown in the properties dialog as the Service Name in the New Resource dialog.

-> In the 'Startup Parameters' box, enter the parameters shown at the end of the path in the 'Path to executable' box of the same service properties dialog. For example: -s:MSSQL.2 -f:INST2

-> In the next screen, don't do anything. Leave the 'Registry Replication' information blank.

-> Click Finish. The Full-Text resource should be visible in your SQL Server group in Cluster Administrator.

-> Test by bringing the Full-Text resource online and failing it over to the other node.



Regards
Abhay

Friday, April 10, 2009

Perfmon Counter recommended values for SQL Server 2005

Below are the PerfMon counter thresholds that I generally refer to:

Memory \ Available MBytes: > 100 MB
Paging File \ % Usage: < 70%
Process (sqlservr) \ % Privileged Time: < 30% of % Processor Time (sqlservr)
Processor \ % Privileged Time: < 30% of total % Processor Time
PhysicalDisk \ Avg. Disk sec/Read: < 8 ms
PhysicalDisk \ Avg. Disk sec/Write: < 8 ms (non-cached), < 1 ms (cached)
SQLServer:Access Methods \ Forwarded Records/sec: < 10 per 100 Batch Requests/sec
SQLServer:Access Methods \ FreeSpace Scans/sec: < 10 per 100 Batch Requests/sec
SQLServer:Access Methods \ Full Scans/sec: (Index Searches/sec) / (Full Scans/sec) > 1000
SQLServer:Access Methods \ Workfiles Created/sec: < 20 per 100 Batch Requests/sec
SQLServer:Access Methods \ Worktables Created/sec: < 20 per 100 Batch Requests/sec
SQLServer:Buffer Manager \ Buffer Cache Hit Ratio: > 90%
SQLServer:Buffer Manager \ Free List Stalls/sec: < 2
SQLServer:Buffer Manager \ Lazy Writes/sec: < 20
SQLServer:Buffer Manager \ Page Life Expectancy: > 300
SQLServer:Buffer Manager \ Page Lookups/sec: (Page Lookups/sec) / (Batch Requests/sec) < 100
SQLServer:Locks \ Lock Requests/sec: (Lock Requests/sec) / (Batch Requests/sec) < 500
SQLServer:SQL Statistics \ SQL Compilations/sec: < 10% of Batch Requests/sec
SQLServer:SQL Statistics \ SQL Re-Compilations/sec: < 10% of SQL Compilations/sec

How to find how the cache/buffer pool is being used and what objects are using it

1: To determine what plans are in the cache and how often they are used, we can use the
sys.dm_os_memory_cache_counters DMV.



SELECT TOP 6
LEFT([name], 20) as [name],
LEFT([type], 20) as [type],
[single_pages_kb] + [multi_pages_kb] AS cache_kb,
[entries_count]
FROM sys.dm_os_memory_cache_counters
order by single_pages_kb + multi_pages_kb DESC



Here:

CACHESTORE_OBJCP holds compiled plans for stored procedures, functions and triggers.

CACHESTORE_SQLCP holds cached SQL statements or batches that aren't in stored procedures, functions or triggers. This includes any dynamic SQL or raw SELECT statements sent to the server.

CACHESTORE_PHDR holds algebrizer trees for views, constraints and defaults. An algebrizer tree is the parsed SQL text that resolves the table and column names.

You will find these counters in DBCC MEMORYSTATUS as well; in fact, DBCC MEMORYSTATUS uses this DMV. This gives us an idea of which cache is filling up.

Generally you will find that CACHESTORE_SQLCP > CACHESTORE_OBJCP (because we cannot frame everything into stored procedures), but if the ratio of one to the other is very high, then more ad hoc plans are being run than stored procedures.

You can also monitor the number of data pages in the plan cache using Performance Monitor (PerfMon), via the SQLServer:Plan Cache object with the Cache Pages counter. There are instances for SQL Plans (CACHESTORE_SQLCP), Object Plans (CACHESTORE_OBJCP) and Bound Trees (CACHESTORE_PHDR). This gives you the same picture; for example, under Bound Trees, multiply Cache Pages by 8 KB and you get the same output as in DBCC MEMORYSTATUS and the DMV we used above.



2: Next, to see which queries are being cached and reused, we can use the
sys.dm_exec_cached_plans and sys.dm_exec_sql_text DMVs.

select TOP 50
objtype,
usecounts,
p.size_in_bytes/1024 'IN KB',
LEFT([sql].[text], 100) as [text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
ORDER BY usecounts DESC

and then we can check the query plans (and sizes) for any we are doubtful about.

3: To find which tables and indexes are in the buffer memory of your server, you can use the sys.dm_os_buffer_descriptors DMV. The query below gives the total current size of the buffer pool.


select count(*) AS Buffered_Page_Count
,count(*) * 8192 / (1024 * 1024) as Buffer_Pool_MB
from sys.dm_os_buffer_descriptors



Once we have found the buffer pool size, we can see which database is using more memory by running the query below.



SELECT LEFT(CASE database_id
WHEN 32767 THEN 'ResourceDb'
ELSE db_name(database_id)
END, 20) AS Database_Name,
count(*)AS Buffered_Page_Count,
count(*) * 8192 / (1024 * 1024) as Buffer_Pool_MB
FROM sys.dm_os_buffer_descriptors
GROUP BY db_name(database_id) ,database_id
ORDER BY Buffered_Page_Count DESC

And then we can go further, to object level, to see which objects are consuming memory (and how much). We can use the query below in each database we wish to examine:

SELECT TOP 25
obj.[name],
i.[name],
i.[type_desc],
count(*)AS Buffered_Page_Count ,
count(*) * 8192 / (1024 * 1024) as Buffer_MB
-- ,obj.name ,obj.index_id, i.[name]
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
SELECT object_name(object_id) AS name
,index_id ,allocation_unit_id, object_id
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND (au.type = 1 OR au.type = 3)
UNION ALL
SELECT object_name(object_id) AS name
,index_id, allocation_unit_id, object_id
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND au.type = 2
) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
LEFT JOIN sys.indexes i on i.object_id = obj.object_id AND i.index_id = obj.index_id
WHERE database_id = db_id()
GROUP BY obj.name, obj.index_id , i.[name],i.[type_desc]
ORDER BY Buffered_Page_Count DESC

Tuesday, January 6, 2009

Setting the CPU priority level for SQL Server (high/low)

SQL Server by default runs at priority base 7, which is normal.
To raise it to 13, we can use the 'priority boost' option in sp_configure.
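The 'priority boost' option is set like this (a sketch; it is an advanced option and generally not recommended on production boxes):

```sql
-- Raise SQL Server's scheduling priority from base 7 to 13 ('priority boost' = 1).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'priority boost', 1
RECONFIGURE
-- A restart of the SQL Server service is required for this to take effect.
```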

But recently I got a question: can we set the CPU priority to lower than normal? There is no option for this inside SQL Server.

One indirect answer is to use the affinity mask, so that SQL Server is restricted to particular CPUs. But that is not the correct answer.

So, is there a way to do it via Task Manager? Well, it has the option, but we can't use it directly, as we get an access denied error.

What you can do is:

1) Open a DOS window >> Start /Low "\Binn\sqlservr.exe"
2) This will open another window.
3) In that new window, type: "C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Binn\sqlservr.exe" [path of the sqlservr executable for the instance] -sMSSQLSERVER <-- instance name [copy it from the Services dialog box]
4) Connect through Management Studio or OSQL/ISQL.
5) Cross-check in Task Manager. You will see that SQL Server is running at low priority.
6) Now, from Task Manager, you can change the priority... not sure why :) ...
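The steps above can be sketched as one command (the MSSQL.2 path and the default instance name come from the example and are assumptions; substitute your own):

```shell
:: Launch SQL Server as a console application at low CPU priority.
:: The first quoted string is the window title required by START;
:: the path and -s instance name are assumed examples.
start /low "SQL low priority" "C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Binn\sqlservr.exe" -sMSSQLSERVER
:: Then connect normally and verify the base priority in Task Manager.
```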

Hope this helps .......

Saturday, October 25, 2008

A significant part of sql server process memory has been paged out.

First of all:
There is no real solution to this issue on Windows 2003 (you might have lost confidence after reading this :) ... read on fully).


This is a sample of the error we get:
2007-01-23 16:30:10.14 spid1s A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 1086400, committed (KB): 2160928, memory utilization: 50%.

Let's try to understand this error first:

Duration: the time, interval or gap after which this error was reported. Here 0 means this is the first time that day the error occurred. You will see a fixed pattern in the durations.

Working set: the memory pages kept in RAM for the use of the process and its threads. Remember that this is managed approximately by the OS; the OS can trim it or add to it if it sees too many pages in RAM for one process while another process cannot get its bare minimum. In our case it is roughly 1 GB. Check Wikipedia for a fuller definition (http://en.wikipedia.org/wiki/Working_set).

Committed: the actual memory committed by SQL Server. No one should be able to take this memory from SQL Server or deallocate it, not even the OS. But it can be paged :) by the OS, and that is exactly what happens.

Memory utilization: the actual memory utilization by SQL Server after its working set has been paged out. In this case it is 50%, which means the remaining 50% has been paged out. Had it been 2%, the remaining 98% would have been paged out. This error starts appearing once the working set of SQL Server has been trimmed by 50% or more.

Now one can easily deduce that about 2 GB of RAM was committed [see 'committed (KB)' in the error] but the working set was only about 1 GB (see the error again). So the current working set is 50% of the committed value. The remaining 50% is on disk, and SQL Server will take a hard page fault whenever a page it needs is on disk.

Many a time you will see that on a cluster the instance simply fails over. This is because of the same effect.


Why does it happen?

Every process needs contiguous chunks of memory, but memory gets fragmented over time.
Some executables need large chunks of contiguous memory, so even if there is enough memory available, it may be too fragmented to use. That is one cause of this issue.

When troubleshooting these types of issues, the key is to find out why the physical memory on the server gets exhausted, creating a situation that forces SQL Server to scale its memory usage back. As long as available memory stays below the limits at which Windows keeps sending low-memory notifications, SQL Server has no option but to scale itself back to avoid poor performance.



More on the dynamic memory management capabilities of SQL Server is discussed at http://msdn2.microsoft.com/en-us/library/ms178145.aspx

There are a few situations where the SQL Server process working set might be paged out by Windows despite these memory resource notification mechanisms.


-> If windows is not sending the correct notifications to all listening processes at the right moment and thresholds.

-> If SQL Server is not responding fast enough to the low memory resource notification from Windows.

-> Conditions in Windows where working sets of all processes are trimmed.

-> Windows might decide to trim a certain percentage of working set of various or specific processes.

-> Incorrect usage of memory management functions by Device Drivers.

SQL Server memory management is designed to dynamically adjust the committed memory based on the amount of available memory on the system. SQL Server does this, with the help of a few Win32 APIs, to avoid being paged out by the OS memory manager.

SQL Server registers for the memory resource notifications of Windows. If a low-memory notification comes from Windows, SQL Server scales its memory usage down. When Windows sends the high-memory notification, SQL Server can grow its memory usage target again. The Win32 APIs being used for this are documented in http://msdn.microsoft.com/library/default.asp?url=/library/en-us/memory/base/creatememoryresourcenotification.asp

Solutions
This issue can be benign or real. Do not panic if the trimmed working set is small, say in KBs, or if you get these errors only when SQL Server starts up.

But if it is the other way round, go through the options I have mentioned below. If eventually nothing works, you need to dig out who on the server needs so much memory that it makes the OS trim SQL Server's working set.

KNOWN ISSUES [please go through each of them and apply as relevant]:



Microsoft has found some known issues that may cause the SQL Server 2005 process working set to be trimmed.



Issue1:



For more information, click the following article number to view the article in the Microsoft Knowledge Base:

905865 The sizes of the working sets of all the processes in a console session may be trimmed when you use Terminal Services to log on to or log off from a computer that is running Windows Server 2003



Issue2:



Computers that are running Windows Server 2003 can be too aggressive when they cache dirty buffers if there is an application performing buffered I/O, such as a file copy operation. This behavior can cause the working set in SQL Server to be trimmed. For more information, click the following article number to view the article in the Microsoft Knowledge Base:



920739 You may experience a decrease in overall system performance when you are copying files that are larger than approximately 500 MB in Windows Server 2003 Service Pack 1 or Service Pack 2



Issue3:



On a computer that is running SQL Server, applications may use the system cache too much. Therefore, the operating system will trim the working set of SQL Server or of other applications. If you notice that the application uses the system cache too much, you can use some memory management functions in the application. These functions control the system cache space that file IO operations can use in the application. For example, you can use the SetSystemFileCacheSize function and the GetSystemFileCacheSize function to control the system cache space that file IO operations can use.



Issue4:



Additionally, third-party device drivers that use the MmAllocateContiguousMemory function and specify the value of the HighestAcceptableAddress parameter to less than 4 gigabytes (GB) may also cause the SQL Server 2005 64-bit working set to be trimmed.



Currently, the iLO Management Channel Interface Driver (Cpqcidrv.sys) from Hewlett-Packard is known to cause this issue on x64 editions of SQL Server 2005.

The integrated Lights-Out Management (iLO) Interface Driver update for Windows Server 2003 is available at the following Hewlett-Packard Web site:

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c00688313&jumpid=reg_R1002_USEN

The problem is fixed in this driver update.



Issue5:



The Broadcom driver bxvbda.sys has been seen to cause memory trimming via MmAllocateContiguousMemory().

This driver ships with Windows 2003 SP2 and is OEM'ed by DELL and HP, but it originally comes from Broadcom.



Disabling the advanced functionality of the driver may fully resolve the issue. To do that, you have to do two things:

- On Windows 2003 SP1, apply the hotfix downloaded from http://support.microsoft.com/default.aspx?scid=kb;EN-US;936594 and make the registry changes below. On Windows 2003 SP2, just make the registry changes below.

- Disable TCP Chimney, TCPA, and RSS using these registry entries:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]

"DisableTaskOffload"=dword:00000001

"EnableTCPA"=dword:00000000

"EnableRSS"=dword:00000000

"EnableTCPChimney"=dword:00000000



TCP Chimney is a stateful offload. TCP Chimney offload enables TCP/IP processing to be offloaded to network adapters that can handle the TCP/IP processing in hardware. Each TCP Chimney offload-capable network adapter has a finite number of connections that it can support in hardware. TCP connections will be offloaded to hardware as long as the hardware can support these connections. After the hardware offload connection limit has been reached, all additional connections are handled by the host stack.

Side effect: the NIC will actually function better.



Article on the effect of the setting: http://support.microsoft.com/default.aspx?scid=kb;EN-US;912222

Article on how this setting affects SQL operations: http://support.microsoft.com/default.aspx?scid=kb;EN-US;918483



Issue 6:

938486 A Windows Server 2003-based computer becomes unresponsive because of a memory manager trimming operation that is caused by an indeterminate module that requests lots of memory

http://support.microsoft.com/default.aspx?scid=kb;EN-US;938486

This kernel hotfix limits the number of trim operations per process to 8,192 pages. Previously, the memory manager would try to trim one-quarter of the working set size of a process. This can help to a certain extent on some of the processes being paged out.
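The effect of that cap is easy to picture with some hypothetical numbers (a sketch; the working-set figure below is made up):

```python
# Hypothetical working set of 500,000 pages (~1.9 GB at 4 KB per page).
working_set_pages = 500_000

trim_before_hotfix = working_set_pages // 4             # one-quarter of the working set
trim_after_hotfix = min(working_set_pages // 4, 8192)   # capped at 8,192 pages per trim
print(trim_before_hotfix, trim_after_hotfix)  # 125000 8192
```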





ACTION PLAN that you can follow :

1. Cap SQL Server max server memory to 75% of RAM and set min server memory to, say, 1 GB.
2. Increase the LowMemoryThreshold value.
Set the LowMemoryThreshold value (in MB) so the OS notifies applications such as SQL Server of low-memory conditions much earlier.

In Regedit -> go to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

Right click on the right pane,

Select New -> DWORD Value -> enter LowMemoryThreshold

Double-click LowMemoryThreshold -> choose Decimal -> enter a value, e.g. 650

A reboot is required for this to take effect.
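If you prefer the command line, the same value can be created with reg.exe (650 is just the example value from the steps above):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LowMemoryThreshold /t REG_DWORD /d 650
```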

Default values per MSDN:

“The default level of available memory that signals a low-memory-resource notification event is approximately 32 MB per 4 GB, to a maximum of 64 MB. The default level that signals a high-memory-resource notification event is three times the default low-memory value.”
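Those defaults can be sketched as a quick calculation (my reading of the MSDN quote above; the exact cap behavior is an assumption):

```python
def default_low_threshold_mb(ram_gb):
    # ~32 MB per 4 GB of RAM, capped at 64 MB overall (per the MSDN quote)
    return min(32 * (ram_gb / 4.0), 64)

def default_high_threshold_mb(ram_gb):
    # the high-memory notification fires at three times the low threshold
    return 3 * default_low_threshold_mb(ram_gb)

print(default_low_threshold_mb(4), default_low_threshold_mb(16))  # 32.0 64.0
```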



3. Disable TCP Chimney, TCPA, and RSS using the registry entries [mentioned above in known issue #5].

4. Reboot the server for the registry values to take effect.

5. If need be, involve the OS (platforms) team to find out if there is any memory leak (the chance of this is slim, but still).
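For step 1, the cap can be set with sp_configure. A sketch for a hypothetical server with 32 GB of RAM (the values are examples, not recommendations):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;  -- ~75% of 32 GB
EXEC sp_configure 'min server memory (MB)', 1024;   -- 1 GB floor
RECONFIGURE;
```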



References :
http://support.microsoft.com/kb/918483

GOOD NEWS :
The Lock Pages in Memory option is available in SQL Server Standard edition from SQL Server 2005 SP3 CU4 and SQL Server 2008 SP1 CU2 onwards.

CAUTION:
LPIM should be the last resort you opt for. In the worst case, if you have to opt for it, make sure you cap SQL Server max server memory to 75% of RAM, or perhaps even less.


Happy Learning !!!

Tuesday, October 14, 2008

To which database does the object belong

You know the object name but you do not know which database it belongs to :) ..

Try sp_msforeachdb (YourObjectName below is a placeholder; substitute the object you are hunting for):

sp_msforeachdb 'use ?; print ''?''; select name from sys.sysobjects where name = ''YourObjectName'''

How to detach All user databases ?

SQL 2005

===========

select 'EXEC sp_detach_db @dbname = N''' + RTRIM(name) + ''', ''true''' from sys.sysdatabases where dbid > 6 and name <> 'distribution'





SQL 2000

==========

select 'EXEC sp_detach_db @dbname = N''' + RTRIM(name) + ''', ''true''' from sysdatabases where dbid > 4 and name <> 'distribution'

hozzzaaat ..

Tuesday, June 24, 2008

ERROR 229: The EXECUTE permission was denied on the object 'xp_cmdshell', database 'mssqlsystemresource', schema 'sys'

I once got this error while experimenting :

ERROR 229: The EXECUTE permission was denied on the object 'xp_cmdshell', database 'mssqlsystemresource', schema 'sys'.

So it seems xp_cmdshell is an object in database 'mssqlsystemresource', schema 'sys'. mssqlsystemresource cannot be accessed without the DAC ...

At first I could not understand much ..But then I tried to run the same query (it was a BCP query) using the Windows domain account, and it was successful ..

So it was clear that this was a permission issue on the SQL login I was using ..
Then, after a long time searching BOL, I found that we needed to set the database TRUSTWORTHY .

ALTER DATABASE mydb SET TRUSTWORTHY ON ...and we were comfortable ..

The other way is to create the xp_cmdshell proxy account to impersonate non-sysadmin users, allowing them to access resources outside SQL Server.

sp_xp_cmdshell_proxy_account [ NULL | { 'account_name' , 'password' } ]

Otherwise you can also use the GUI ..

Hope this helps ..

Script that will help identify the query causing TempDB growth

I am not creating any indexes, assuming we know how to create them.

CREATE TABLE [dbo].[tempUsage](
    [Date_Time] [datetime] NOT NULL,
    [session_id] [smallint] NULL,
    [sql_handle] [varbinary](64) NULL,
    [SQL_Text] [varchar](8000) NULL,
    [request_internal_objects_alloc_page_count] [bigint] NULL,
    [request_internal_objects_dealloc_page_count] [bigint] NULL
) ON [PRIMARY]



GO



CREATE VIEW all_request_usage
AS
SELECT session_id, request_id,
    SUM(internal_objects_alloc_page_count) AS request_internal_objects_alloc_page_count,
    SUM(internal_objects_dealloc_page_count) AS request_internal_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
GROUP BY session_id, request_id;



GO



CREATE VIEW all_query_usage
AS
SELECT R1.session_id, R1.request_id,
    R1.request_internal_objects_alloc_page_count,
    R1.request_internal_objects_dealloc_page_count,
    R2.sql_handle, R2.statement_start_offset, R2.statement_end_offset, R2.plan_handle
FROM all_request_usage R1
INNER JOIN sys.dm_exec_requests R2
    ON R1.session_id = R2.session_id AND R1.request_id = R2.request_id;



GO



Let’s run the statement below every minute to capture the SQL statements causing TempDB growth.

INSERT INTO tempUsage
SELECT
    GETDATE() AS Date_Time,
    R1.session_id,
    R1.sql_handle,
    CAST(R2.text AS varchar(8000)) AS SQL_Text,
    R1.request_internal_objects_alloc_page_count,
    R1.request_internal_objects_dealloc_page_count
FROM all_query_usage AS R1
OUTER APPLY sys.dm_exec_sql_text(R1.sql_handle) AS R2
WHERE R1.request_internal_objects_alloc_page_count <> 0;
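Once a few samples have been collected, something like this can rank the offenders (assumes the tempUsage table created above):

```sql
SELECT TOP 10
    session_id,
    SQL_Text,
    MAX(request_internal_objects_alloc_page_count) AS max_alloc_pages
FROM dbo.tempUsage
GROUP BY session_id, SQL_Text
ORDER BY max_alloc_pages DESC;
```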

Thursday, June 12, 2008

Installing Replication bits on a cluster after upgrading to SP2

Recently we faced an issue where SQL Server was on SP2 (clustered environment) and we were not able to install replication ..This is fairly simple on a standalone machine, but not on a cluster ..

If you try to achieve it through Add/Remove Programs (appwiz.cpl) you may get this error: The Installer has encountered an unexpected error. The error code is 2819. Control ProductClash1 on dialog Product clash needs a property linked to it.

The solution is to use the command-line setup ..we tried several combinations until this one was successful:

start /wait setup.exe /qb VS=TSQL31 INSTANCENAME=TSQL31 ADDLOCAL=SQL_Replication

Now Replication is installed, but it is on RTM :) ...you still need to apply SP2 ..
Moreover, any CU after SP2 will fail while the Replication bits are on RTM ...

Use the same command with the hotfix name instead of setup.exe.

Can we detach databases with Replication ON ..

The answer is no ........we cannot detach a database that is participating in replication ..

Consider a scenario where you have hundreds of databases replicated and you fall short of disk :D ...

Disabling/removing Replication would be costly and time consuming ...
But we can follow the simple steps below and everything will be fine ...

1. Run ALTER DATABASE database_name SET OFFLINE.
2. Move the file to the new location.
3. Run ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name, FILENAME = 'new_path/os_file_name' ).
4. Run ALTER DATABASE database_name SET ONLINE.
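A worked example of the steps, with hypothetical names (database MyPubDB, logical log file MyPubDB_log, new path E:\NewPath):

```sql
ALTER DATABASE MyPubDB SET OFFLINE;
-- ...move the physical file to the new drive at the OS level...
ALTER DATABASE MyPubDB MODIFY FILE
    (NAME = MyPubDB_log, FILENAME = 'E:\NewPath\MyPubDB_log.ldf');
ALTER DATABASE MyPubDB SET ONLINE;
```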

Trust me ...you are done :) ..

Comments are welcome as always ..

Sunday, February 17, 2008

Understanding Memory issues through Perfmon:Page faults

I have been using Perfmon almost daily, but I never took the trouble to learn how to use it efficiently until I came across a situation …In this part we will discuss using Perfmon to spot a memory bottleneck.
Before we proceed, let us go through a few definitions so that the rest of the material is easily understood.

Virtual memory: It is actually virtual …and this is where the concept of paging comes into the picture. On a 32-bit machine it is 2 GB for a user process by default. So, if your RAM is 512 MB, anything beyond that which is backed by virtual memory will be paged out to disk. Operations worth 2 GB happen in the VAS, RAM is used as the storage, and the moment the RAM limit is exceeded …..paging/swapping.

Page: Committed data in the VAS is tracked through page tables. From there it goes to RAM or to the page file. This data is written in pages whose size (in bytes) is processor dependent. A page is fitted into a page frame in RAM, and that frame must be filled with zeros before a page from disk is inserted into it; zeroing is essential because, before zeroing, the frame may hold information from some other process. When we restart Windows, this zeroing activity is carried out by the virtual memory manager before Windows comes up. And if there is some problem during zeroing then………….. :D …we will discuss that some other time …

Page file: space on the HDD used to back committed memory objects.

Reserved memory: the memory quota given to a process by the VM manager. The process first has to consume this before the next quota is allocated …

Committed memory: the chunk of reserved memory that is actually being used by the process. Remember that whenever memory is committed, the same amount of space is set aside in the page file so that, if needed, it can be flushed to disk.

Working set: the physical memory assigned to a process by the VM manager (or, say, the OS). It will always be less than Private Bytes.

Page fault: When the page needed is not found in the working set of the process but is still somewhere in RAM (maybe in some cache), it is called a soft page fault. But when the page is not found in RAM at all, it has been paged out, and fetching it is a hard page fault.

So now it is clear that whenever there is a memory issue there will be a high (hard) page fault rate... A constantly high hard page fault rate means some data needed by the working set of that process is not in RAM, so it has to be fetched from disk (which is going to be costly). And hard page faults cause paging. So is paging harmful? No (we cannot avoid it), but an excess of it is a signal that something is going wrong. Generally I have seen that keeping the page file equal to 1.5 times RAM is OK ..not bad.
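The 1.5× rule of thumb is trivial arithmetic, but worth a sketch (the RAM figure is just an example):

```python
def suggested_pagefile_mb(ram_mb):
    # rule of thumb from the text: page file ~1.5x RAM
    return int(ram_mb * 1.5)

print(suggested_pagefile_mb(512))  # 768
```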

For a memory issue, the counters we need to watch in Perfmon are Memory and Process...

The sub-counters for the Memory counter are: Page Faults/sec, Page Reads/sec, Page Writes/sec, Pages Input/sec, Pages Output/sec, Available Bytes and Nonpaged Pool Bytes.

The sub-counters for the Process counter are: Page Faults/sec, Working Set, Private Bytes and Page File Bytes.

So, % soft page faults = ((Page Faults/sec - Pages Input/sec) x 100) / Page Faults/sec
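The formula can be written as a small function (a sketch; the inputs are the raw Perfmon counter values):

```python
def soft_page_fault_pct(page_faults_per_sec, pages_input_per_sec):
    # % soft faults = ((Page Faults/sec - Pages Input/sec) * 100) / Page Faults/sec
    return (page_faults_per_sec - pages_input_per_sec) * 100.0 / page_faults_per_sec

print(soft_page_fault_pct(200, 30))  # 85.0 -- i.e. 15% of faults were hard
```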

Now, if there are more hard page faults, there should be more Pages Output/sec (the number of pages sent to disk to make room for the pages coming from disk due to hard page faults), since the working set is limited and space must be made for the incoming pages. Pages Reads/sec and Pages Input/sec should be almost similar, though they can differ a bit …So these counters can tell you the system's story. One more counter that I forgot to mention is Available Bytes: the number of bytes left for processes to allocate. To see how much SQL Server contributes, look at its Page Faults/sec and proportion it against the total page faults. Generally, a hard page fault rate of 10% to 15% can be tolerated …Before we blame memory entirely, let's check the disk performance as well...

Subtract Memory: Page Reads/sec from Logical Disk: Disk Reads/sec. If the difference is small (+ or -), then memory is the bottleneck, because this means nearly all the disk reading is being done to fetch the faulting pages. Also, if Pages Output/sec is high, you can compare it with Disk Write Bytes/sec ÷ 4096 (4096 being the page size on Intel) and find what percentage of total Disk Write Bytes/sec paging accounts for.

Similarly, you can then correlate this to SQL Server as well …..Once you are sure that memory is the bottleneck, we can proceed with the troubleshooting steps...
Next time we will try to dig into memory issues, including memory leaks, and also Perfmon counters specific to SQL Server …

As always, your comments are welcome...