Saturday, October 25, 2008

A significant part of sql server process memory has been paged out.

First of all :
There is no complete solution to this issue on Windows 2003 (you might have lost confidence after reading this :), but do read it fully).

This is the sample error that we get :
2007-01-23 16:30:10.14 spid1s A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 1086400, committed (KB): 2160928, memory utilization: 50%.

Let's try to understand this error first :

Duration : the interval after which this error has been reported. Here 0 means that this is the first time that day the error occurred. You will usually see a fixed pattern in the duration.

Working set : the memory pages kept in RAM for the use of the process's threads. Remember that the working set is managed by the OS, and it is the OS that can trim it or grow it, for example if it sees that too many pages are in RAM for one process while another process cannot get even its bare minimum of pages. In our case it is roughly 1 GB. Check Wikipedia for a more formal definition.

Committed : the memory actually committed by SQL Server. No one should be able to take this memory from SQL Server or deallocate it, not even the OS. But it can be paged :) by the OS, and that is exactly what the OS does.

Memory utilization : the actual memory utilization by SQL Server after its working set is paged out. In this case it is 50%, which means the remaining 50% has been paged out. Had it been 2%, the remaining 98% would have been paged out. This error starts appearing once the working set of SQL Server is trimmed by 50% or more.

Now, one can easily deduce that about 2 GB of RAM was committed [see committed in the error] but only about 1 GB was in the working set (see the error again). So the current working set is 50% of the committed value. The remaining 50% is on disk, and hence SQL Server will take a hard page fault whenever the page it is looking for is on disk.
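
If you are on SQL Server 2008 or later, you can watch these same numbers from inside SQL Server via the sys.dm_os_process_memory DMV (it does not exist in SQL Server 2005). A quick sketch:

```sql
-- Available from SQL Server 2008 onwards: compare the working set
-- (physical_memory_in_use_kb) with the committed virtual address space.
-- memory_utilization_percentage is the same ratio reported in the error message.
SELECT physical_memory_in_use_kb,          -- roughly the working set
       virtual_address_space_committed_kb, -- committed memory
       memory_utilization_percentage,      -- at or below 50% the error starts appearing
       page_fault_count,
       process_physical_memory_low         -- 1 when Windows is signalling low physical memory
FROM sys.dm_os_process_memory;
```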

Many a time you will see that a cluster will simply fail over. That is because of the same effect.

Why does it happen ??

Every process needs contiguous chunks of memory, but memory is often fragmented ...
Some EXEs need large chunks of contiguous memory. So even if there is enough memory available, it may be too fragmented to satisfy the request. And that is the cause of this issue.

When troubleshooting these type of issues, the key is to find out why the physical memory on the server gets exhausted and creates a situation that forces sql server to scale its memory usage back. As long as the available memory is below the limits for which Windows keeps sending low memory notification, sql server has no option but to scale itself back to avoid poor performance.

More on the dynamic memory management capabilities of SQL Server is discussed in SQL Server Books Online.

There are a few situations where the SQL Server process working set might be paged out by Windows despite these memory resource notification mechanisms.

-> If Windows is not sending the correct notifications to all listening processes at the right moment and thresholds.

-> If SQL Server is not responding fast enough to the low memory resource notification from Windows.

-> Conditions in Windows where working sets of all processes are trimmed.

-> Windows might decide to trim a certain percentage of working set of various or specific processes.

-> Incorrect usage of memory management functions by Device Drivers.

SQL Server memory management is designed to dynamically adjust the committed memory based on the amount of available memory on the system. SQL Server does this to avoid being paged out by the OS memory manager with the help of some APIs.

SQL Server registers with the memory resource notifications of Windows. If a low memory notification comes from Windows, SQL Server scales its memory usage down. When Windows sends the high memory notification, SQL Server can grow its memory usage target. The Win32 APIs being used for this are documented on MSDN.

This issue can be fake or real. Do not panic if that working set is low, say in KBs, or if you get these errors only when SQL Server starts up.

But if it is the other way round, then you can go through the options I have mentioned below. If eventually nothing works, then you need to dig out who on the server needs so much memory that it makes the OS trim SQL Server's working set.

KNOWN ISSUES[Please go through each of it and apply as relevant]:

Microsoft has found some known issues that may cause the SQL Server 2005 process working set to be trimmed.


For more information, click the following article number to view the article in the Microsoft Knowledge Base:

905865 The sizes of the working sets of all the processes in a console session may be trimmed when you use Terminal Services to log on to or log off from a computer that is running Windows Server 2003


Computers that are running Windows Server 2003 can be too aggressive when they cache dirty buffers if there is an application performing buffered I/O, such as a file copy operation. This behavior can cause the working set in SQL Server to be trimmed. For more information, click the following article number to view the article in the Microsoft Knowledge Base:

920739 You may experience a decrease in overall system performance when you are copying files that are larger than approximately 500 MB in Windows Server 2003 Service Pack 1 or Service Pack 2


On a computer that is running SQL Server, applications may use the system cache too much. Therefore, the operating system will trim the working set of SQL Server or of other applications. If you notice that the application uses the system cache too much, you can use some memory management functions in the application. These functions control the system cache space that file IO operations can use in the application. For example, you can use the SetSystemFileCacheSize function and the GetSystemFileCacheSize function to control the system cache space that file IO operations can use.


Additionally, third-party device drivers that use the MmAllocateContiguousMemory function and specify the value of the HighestAcceptableAddress parameter to less than 4 gigabytes (GB) may also cause the SQL Server 2005 64-bit working set to be trimmed.

Currently, the iLO Management Channel Interface Driver (Cpqcidrv.sys) from Hewlett-Packard is known to cause this issue on x64 editions of SQL Server 2005.

The integrated Lights-Out Management (iLO) Interface Driver update for Windows Server 2003 is available at the following Hewlett-Packard Web site:

The problem is fixed in this driver update.


The Broadcom driver bxvbda.sys can also cause memory trimming via MmAllocateContiguousMemory().

This driver is in Windows 2003 SP2, and OEM'ed from DELL and HP, but originally from Broadcom.

Disabling the advanced functionality from the driver may fully resolve the issue. In order to do that you have to do 2 things:

- On Windows 2003 SP1, apply the hotfix from KB936594 and perform the registry changes mentioned below. On Windows 2003 SP2, just perform the registry changes mentioned below.

- Disable TCP Chimney, TCPA, and RSS using the following registry entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters (each a DWORD value set to 0):

EnableTCPChimney = 0
EnableTCPA = 0
EnableRSS = 0
TCP Chimney is a stateful offload. TCP Chimney offload enables TCP/IP processing to be offloaded to network adapters that can handle the TCP/IP processing in hardware. Each TCP Chimney offload-capable network adapter has a finite number of connections that it can support in hardware. TCP connections will be offloaded to hardware as long as the hardware can support these connections. After the hardware offload connection limit has been reached, all additional connections are handled by the host stack.

Side effects: the NIC will actually function better

Article on the effect of the setting: KB912222.

Article on how this setting affects SQL Server operations: KB918483.

Issue 6:

938486 A Windows Server 2003-based computer becomes unresponsive because of a memory manager trimming operation that is caused by an indeterminate module that requests lots of memory (KB938486).

This kernel hotfix limits each trim operation to 8,192 pages per process. Previously, the memory manager would try to trim one quarter of the working set of a process. This can help, to a certain extent, with some of the processes being paged out.

ACTION PLAN that you can follow :

1. Cap SQL Server max server memory to 75% of RAM, and set min server memory to, say, 1 GB.
2. Increase the LowMemoryThreshold value.
Set the LowMemoryThreshold value (in MB) so the OS will notify applications such as SQL Server of low-memory conditions much earlier.

In Regedit -> go to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
Right click on the right pane,

Select New -> select click DWORD Value -> enter LowMemoryThreshold

Double-click LowMemoryThreshold -> choose Decimal -> enter the value, e.g. 650.

A reboot is required for the change to take effect.

Default values per MSDN:

“The default level of available memory that signals a low-memory-resource notification event is approximately 32 MB per 4 GB, to a maximum of 64 MB. The default level that signals a high-memory-resource notification event is three times the default low-memory value.”

3. Disable TCP Chimney, TCPA, and RSS using the registry entries [mentioned above in the known issues].

4. Reboot the server for the registry values to take effect.

5. If need be, involve the OS (platforms) team to find out if there is any memory leak (the chance of this is very low, but still).
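
As a sketch of step 1, here is how you could cap max server memory at 75% of a hypothetical 8 GB box (6144 MB) and set min server memory to 1 GB; the numbers are examples, so adjust them to your own RAM:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cap SQL Server at 75% of an assumed 8 GB of RAM
EXEC sp_configure 'max server memory (MB)', 6144;
-- keep at least 1 GB committed to SQL Server
EXEC sp_configure 'min server memory (MB)', 1024;
RECONFIGURE;
```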

References :

The Lock Pages in Memory option is available in SQL Server Standard Edition from SQL Server 2005 SP3 CU4 and SQL Server 2008 SP1 CU2 onwards.

LPIM should be the last resort that you opt for. In the worst case, if you have to opt for it, make sure you cap SQL Server max server memory to 75% of RAM, or maybe less.

Happy Learning !!!

Tuesday, October 14, 2008

To which database does the object belong?

You know the object name, but you do not know which database it belongs to :) ..

Try sp_msforeachdb :
sp_msforeachdb 'use ?;print "?";select name from sys.sysobjects where name=''YourObjectName'''
(YourObjectName is a placeholder for the object you are looking for.)

How to detach All user databases ?

SQL 2005


select 'EXEC sp_detach_db @dbname = N'''+RTRIM ( name)+''','''+'true''' from sys.sysdatabases where dbid>6 and name not like 'distribution'

SQL 2000


select 'EXEC sp_detach_db @dbname = N'''+RTRIM ( name)+''','''+'true''' from sysdatabases where dbid>4 and name not like 'distribution'

hozzzaaat ..

Tuesday, June 24, 2008

ERROR 229: The EXECUTE permission was denied on the object 'xp_cmdshell', database 'mssqlsystemresource', schema 'sys'

I once got this error while experimenting :

ERROR 229: The EXECUTE permission was denied on the object 'xp_cmdshell', database 'mssqlsystemresource', schema 'sys'.

So it seems that xp_cmdshell is an object in the database 'mssqlsystemresource', schema 'sys', and mssqlsystemresource cannot be accessed without the DAC ...

In the first go I could not understand much .. But then I tried to run the same query (it was a BCP query) using the Windows domain account, and it was successful ..

So it was clear that this was a permission issue on the SQL login that I was using ..
Then, after a long time searching BOL, I found that we need to set the database TRUSTWORTHY .

ALTER DATABASE mydb SET TRUSTWORTHY ON ...and we were comfortable ..

The other way is to create the xp_cmdshell proxy account to impersonate non-sysadmin users and allow them to access resources outside SQL Server.

sp_xp_cmdshell_proxy_account [ NULL | { 'account_name' , 'password' } ]

Otherwise you can also use the GUI ..
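
For example (the login and credential names below are hypothetical), a sysadmin could enable xp_cmdshell, set the proxy account, and grant EXECUTE to a non-sysadmin login roughly like this:

```sql
-- enable xp_cmdshell (it is off by default in SQL Server 2005)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- hypothetical Windows account used as the proxy credential
EXEC sp_xp_cmdshell_proxy_account 'MYDOMAIN\SqlCmdProxy', 'StrongPassword1';

-- let a hypothetical non-sysadmin SQL login run xp_cmdshell
USE master;
GRANT EXECUTE ON xp_cmdshell TO [MySqlLogin];
```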

Hope this helps ..

Script that will help in identifying queries causing TempDB growth

I am not creating any indexes, assuming we know how to create them.

CREATE TABLE [dbo].[tempUsage](
[Date_Time] [datetime] NOT NULL,
[session_id] [smallint] NULL,
[sql_handle] [varbinary](64) NULL,
[SQL_Text] [varchar](8000) NULL,
[request_internal_objects_alloc_page_count] [bigint] NULL,
[request_internal_objects_dealloc_page_count] [bigint] NULL
);

CREATE VIEW all_request_usage
AS
SELECT session_id, request_id,
SUM(internal_objects_alloc_page_count) AS request_internal_objects_alloc_page_count,
SUM(internal_objects_dealloc_page_count) AS request_internal_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
GROUP BY session_id, request_id;


CREATE VIEW all_query_usage
AS
SELECT R1.session_id, R1.request_id,
R1.request_internal_objects_alloc_page_count, R1.request_internal_objects_dealloc_page_count,
R2.sql_handle, R2.statement_start_offset, R2.statement_end_offset, R2.plan_handle
FROM all_request_usage R1
INNER JOIN sys.dm_exec_requests R2 ON R1.session_id = R2.session_id AND R1.request_id = R2.request_id;


Let's run the statement below every minute to capture the SQL statements that are causing TempDB growth:

INSERT INTO tempUsage
SELECT GETDATE() AS Date_Time,
R1.session_id,
R1.sql_handle,
CAST(R2.text AS varchar(8000)) AS SQL_Text,
R1.request_internal_objects_alloc_page_count,
R1.request_internal_objects_dealloc_page_count
FROM all_query_usage AS R1
OUTER APPLY sys.dm_exec_sql_text(R1.sql_handle) AS R2
WHERE R1.request_internal_objects_alloc_page_count <> 0;


Thursday, June 12, 2008

Installing Replication bits on a cluster after upgrading to SP2

Recently we faced an issue where SQL Server was on SP2 (clustered environment) and we were not able to install replication .. This is fairly simple on a standalone machine, but not on a cluster ..

If you try to achieve it through Add/Remove Programs (appwiz.cpl) you may get this error: The installer has encountered an unexpected error. The error code is 2819. Control ProductClash1 on dialog Product clash needs a property linked to it.

The solution is to use the command-line setup .. we tried several combinations until this one was successful:

start /wait setup.exe /qb VS=TSQL31 INSTANCENAME=TSQL31 ADDLOCAL=SQL_Replication

Now Replication is installed, but it is on RTM :) so we still need to apply SP2 ..
Moreover, any CU after SP2 will fail while the Replication bits are on RTM ...

To apply SP2, use the same command with the hotfix package name instead of setup.exe.

Can we detach databases with Replication ON?

The answer is no ........ we cannot detach a database which is participating in replication ..

Consider a scenario where you have hundreds of databases replicated and you fall short of disk space :D ...

Disabling/removing Replication will be costly and time consuming ...
But we can follow the simple steps below and everything will be fine ...

1. Run ALTER DATABASE database_name SET OFFLINE.
2. Move the file to the new location.
3. Run ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name, FILENAME = 'new_path/os_file_name' ).
4. Run ALTER DATABASE database_name SET ONLINE.
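
For a hypothetical database mydb whose log file is being moved to a new drive, the steps above would look roughly like this (the logical file name and path are examples only):

```sql
-- 1. take the database offline
ALTER DATABASE mydb SET OFFLINE;

-- 2. move the OS file to the new drive (done outside SQL Server, e.g. with Explorer or xcopy)

-- 3. point the catalog at the new location
ALTER DATABASE mydb
MODIFY FILE ( NAME = mydb_log, FILENAME = 'E:\SQLData\mydb_log.ldf' );

-- 4. bring it back online; replication stays intact
ALTER DATABASE mydb SET ONLINE;
```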

Trust me, we are done :) ..

Comments are welcome as always ..

Sunday, February 17, 2008

Understanding Memory issues through Perfmon:Page faults

I have been using Perfmon almost daily, but never took the pain to learn how to use it efficiently until I came across a situation … In this part we will discuss using Perfmon to spot a memory bottleneck.
Before we proceed, let us go through a few definitions so that the rest of the stuff is easily understood.

Virtual memory: it is actually virtual … and from here the concept of paging comes into the picture. On a 32-bit machine it is 2 GB per user process by default …. So if your RAM is 512 MB, anything beyond that 512 MB comes from virtual memory and will be paged out to disk. So operations worth 2 GB happen in the VAS (virtual memory), RAM is used as the backing store, and the moment the RAM limit is exceeded ….. paging/swapping.

Page: committed data in the VAS is recorded in page tables. From there it goes to RAM or the page file. This data is written in pages (in bytes) and the page size is processor dependent. A page is fitted into a page frame in RAM … it is essential that the frame be filled with zeros before a page from disk is inserted into it. Zeroing is essential because before zeroing the frame may contain information from another process .. This zeroing activity takes place when we restart Windows, and it is completed by the virtual memory manager before Windows comes up .. And if there is some problem during zeroing then ………….. :D … we will discuss that some other time …

Page file: a space in the HDD to be used to save committed memory objects.

Reserved memory: the memory quota given to a process by the VM manager. The process first has to consume this before the next quota is allocated …

Committed memory: The chunk from reserved memory that is actually being used by the process. Remember that whenever memory is committed the same amount of space is created on the pagefile so that if needed it can be flushed to disk.

Working set: the physical memory assigned to a process by the VM manager (or say the OS). It will always be less than Private Bytes.

Page fault: when the page needed is not found in the working set of the process but is within RAM (maybe in some cache), it is called a soft page fault. When the page is not found in RAM at all, it has been paged out, and the fault is called a hard page fault.

So now it is clear that whenever there is a memory issue there will be a high (hard) page fault rate... A constantly high hard page fault rate means that some data needed by the working set of a process is not found in RAM, so it has to be fetched from disk (which is going to be costly). And hard page faults cause paging. So is paging harmful? …. No (we cannot avoid it), but an excess of it is a signal that something is going wrong. Generally I have seen that keeping the page file equal to 1.5 times RAM is OK .. not bad.

In case of a memory issue, the counters we need to look at in Perfmon are Memory and Process...

The Sub counters for memory counter are: Page Faults/sec, Page Reads/sec, Page Writes/sec, Pages Input/sec, Pages Output/sec, Available bytes and nonpaged pool bytes.

The Sub counters for Process counter are: Page Faults/sec, working set, Private Bytes and Page File Bytes

So, % soft page faults = ((Page Faults/sec - Pages Input/sec) x 100) / Page Faults/sec. For example, with 200 page faults/sec of which 20/sec are pages input (hard faults), the soft fault percentage is ((200 - 20) x 100) / 200 = 90%.

Now, if there are more hard page faults, there should be more Pages Output/sec (the number of pages sent to disk to make room for the pages coming in from disk due to hard page faults), since the working set is limited and space has to be made for the pages coming from disk (hard page faults) ... Page Reads/sec and Pages Input/sec should be almost similar, but can differ a bit … So these counters can tell you the system's story.. One more counter that I forgot to mention is Available Bytes. It is the number of bytes left for processes to allocate. To see how much SQL Server contributes, look at its Process: Page Faults/sec and take it as a proportion of the total page faults. Generally, a hard page fault rate of 10% to 15% can be tolerated … Before we totally blame memory, let's check the disk performance as well...

Subtract Memory: Page Reads/sec from Logical Disk: Disk Reads/sec. If the difference is small (+ or -) then memory is the bottleneck, because it means almost all the disk reading is being done to resolve faulting pages. Also, if Pages Output/sec is high, you can check by taking the difference between Pages Output/sec and Disk Write Bytes/sec ÷ 4096 (the x86 page size) and finding its percentage of total Disk Write Bytes/sec.

Similarly, you can then correlate it to SQL Server as well …..Once you are sure that memory is the bottleneck then we can proceed with the troubleshooting steps...
Next time we will try to dig into memory issues including memory leaks and also perfmon counters specific to SQL Server …

As always, your comments are welcome...

Wednesday, February 13, 2008

Shall we use /PAE and /3GB switches together? My views

Physical Address Extension (PAE) refers to a feature of x86 and x86-64 processors that allows the OS to see more than the regular 4 GB of physical RAM on a 32-bit machine. To use it we need to specify the /PAE switch in the boot.ini file and reboot the machine so that the setting takes effect.

So when you enable PAE, the physical address size is increased from 32 bits to 36 bits, which can address up to 64 GB. Note that the 32-bit size of the virtual address space never changes; it remains 4 GB. The OS uses PTEs (page table entries) to map this 4 GB address space onto up to 64 GB of physical memory. Now any app which needs access to more than 4 GB of memory has to have some special mechanism; on MS Windows this mechanism is called Address Windowing Extensions (AWE), while on Unix-like systems a variety of tricks are used, such as using mmap() to map regions of a file into and out of the address space as needed, but none has been adopted as a standard.

AWE is an MS API that allows apps to access more physical memory than is available in the VAS .. The process through which it maps the VAS to physical memory is called windowing ... Not all applications are AWE enabled .. but SQL Server is .. However, to use it we need the Lock Pages in Memory privilege enabled. One more benefit of using AWE in SQL Server is that memory allocated through AWE will never be written to the page file .. so AWE can prevent Windows from swapping sensitive data from RAM to the page file.
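
On 32-bit SQL Server 2005, AWE is switched on with sp_configure (after granting the Lock Pages in Memory privilege to the service account); a sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- requires the Lock Pages in Memory privilege for the SQL Server service account
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- takes effect after a restart of the SQL Server service
```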

Now, I am pretty sure we all know the /3GB setting, but I just wanted to convey that with this setting the physical memory limit on Windows Server 2003 is 16 GB only .. So even if you have more than 16 GB of RAM you cannot see/use it :D ... that is because /3GB shrinks the space available for the PTE table.

Now comes the real question .... should we use the /PAE and /3GB switches together? ..... Well, one negative factor I have already mentioned above. Secondly, just for 1 GB I don't feel it would be worth it ..... It is set on my machine and I have no issues :)..

However, I have seen a few cases where this combination creates situations like a blue screen or black screen upon boot up ..... that happens when the OS goes for a toss while going through the zeroing activity .. But I think this still needs some good research ... So I do not say don't use it, but surely test it first.

So , summary is that I am not totally against it but i wont prefer it ....
Your choice :D .......

Tuesday, February 12, 2008

What all objects are consuming memory (and how much)

To determine what plans are in the cache and how often they are used, we can use the sys.dm_os_memory_cache_counters DMV.

SELECT TOP 6 LEFT([name], 20) as [name], LEFT([type], 20) as [type],
[single_pages_kb] + [multi_pages_kb] AS cache_kb, [entries_count]
FROM sys.dm_os_memory_cache_counters
ORDER BY single_pages_kb + multi_pages_kb DESC

here :
CACHESTORE_OBJCP holds compiled plans for stored procedures, functions and triggers. CACHESTORE_SQLCP holds cached SQL statements or batches that aren't in stored procedures, functions or triggers; this includes any dynamic SQL or raw SELECT statements sent to the server. CACHESTORE_PHDR holds algebrizer trees for views, constraints and defaults; an algebrizer tree is the parsed SQL text that resolves the table and column names.
(You will find these counters in DBCC MEMORYSTATUS as well. In fact, DBCC MEMORYSTATUS uses this DMV.)

Generally you will find that CACHESTORE_SQLCP > CACHESTORE_OBJCP, but if the ratio of one to the other is very high then we can say that more ad hoc plans are being run than stored procedures. That is the reason the SQL statements are going into the plan cache.
You can also monitor the number of pages in the plan cache using Performance Monitor (Perfmon) via the SQLServer:Plan Cache object with the Cache Pages counter. There are instances for SQL Plans (CACHESTORE_SQLCP), Object Plans (CACHESTORE_OBJCP) and Bound Trees (CACHESTORE_PHDR). This will give you the same picture .. e.g. under Bound Trees, multiply Cache Pages by 8 KB and you will get the same output as in DBCC MEMORYSTATUS and the DMV we used above.

After this we can use the sys.dm_exec_cached_plans and sys.dm_exec_sql_text DMVs to find the queries :
select TOP 100 objtype, usecounts, p.size_in_bytes/1024 'IN KB', LEFT([sql].[text], 100) as [text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
ORDER BY usecounts DESC

And then we can check the query plans (and their size) for any we have doubts about.
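
To pull the actual XML plan for a plan you are suspicious about, you can cross apply sys.dm_exec_query_plan to the cached plans; a sketch:

```sql
-- use count, size, and XML plan for each cached plan, biggest first
SELECT p.usecounts,
       p.size_in_bytes / 1024 AS size_kb,
       qp.query_plan          -- click the XML in SSMS to open the graphical plan
FROM sys.dm_exec_cached_plans AS p
CROSS APPLY sys.dm_exec_query_plan(p.plan_handle) AS qp
ORDER BY p.size_in_bytes DESC;
```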

Now, SQL Server memory is primarily used to store data (the buffer pool) and query plans (the plan cache). To find which tables and indexes are in the buffer pool of your server, you can use the sys.dm_os_buffer_descriptors DMV. Further, the query below gives the total current size of the buffer pool:

select count(*) AS Buffered_Page_Count,
count(*) * 8192 / (1024 * 1024) as Buffer_Pool_MB
from sys.dm_os_buffer_descriptors

After we have found the buffer pool size, we can see which database is using more memory by running the query below:

SELECT LEFT(CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE db_name(database_id) END, 20) AS Database_Name,
count(*) AS Buffered_Page_Count,
count(*) * 8192 / (1024 * 1024) as Buffer_Pool_MB
FROM sys.dm_os_buffer_descriptors
GROUP BY db_name(database_id), database_id
ORDER BY Buffered_Page_Count DESC

And then we can go further, to the object level, to see which objects are consuming memory (and how much). We can use the query below in each database we wish to check:

SELECT TOP 25 obj.[name], i.[name], i.[type_desc],
count(*) AS Buffered_Page_Count,
count(*) * 8192 / (1024 * 1024) as Buffer_MB
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN (
    SELECT object_name(object_id) AS name, index_id, allocation_unit_id, object_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
    UNION ALL
    SELECT object_name(object_id) AS name, index_id, allocation_unit_id, object_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.type = 2
) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id
LEFT JOIN sys.indexes i ON i.object_id = obj.object_id AND i.index_id = obj.index_id
WHERE database_id = db_id()
GROUP BY obj.[name], obj.index_id, i.[name], i.[type_desc]
ORDER BY Buffered_Page_Count DESC

Auto Parameterization in SQL Server

Let us look at how and when SQL Server can parameterize dynamic SQL to reuse the query plan. It is through parsing and compiling that query plans get into memory, where they may or may not be reused.

Before executing a batch of SQL statements, SQL Server creates an execution plan (or query plan) for those statements. This is also referred to as compiling the SQL statements. One of the benefits of stored procedures is that SQL Server will reuse compiled query plans.

Here is the example :
set showplan_all on
SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = 56000
GO
select stats.execution_count AS exec_count, p.size_in_bytes as [size], [sql].[text] as [plan_text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
GO

Using the above mentioned DMV's we can find the number of times this plan has been executed, how big the plan is in bytes and the text of the plan.

Notice that the integer 56000 has been replaced by a variable ([@1])in the text of the query plan. Also notice that the list of parameters (only one parameter in this case) has been inserted at the beginning of the query plan. This is called "Simple Parameterization" (It was called "Auto Parameterization" in SQL Server 2000). If we run it twice we'd see the execution count increase to two but there would still only be a single plan.
SQL Server can also parameterize queries whose texts are not identical.

Let's check this :

SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = 56000
GO
SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = 56001
GO
select * from Sales.SalesOrderHeader where SalesOrderID = 56002
GO
select stats.execution_count AS exec_count, p.size_in_bytes as [size], [sql].[text] as [plan_text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
GO

Now let's look at something different.

SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = 56000
GO
SELECT * FROM AdventureWorks.Sales.SalesOrderHeader WHERE SalesOrderID = 56001
GO
declare @i int
set @i = 56004
SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = @i
GO
select stats.execution_count AS exec_count, p.size_in_bytes as [size], [sql].[text] as [plan_text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
GO

So, even slight changes in the text of the query prevent SQL Server from reusing query plans. If you want SQL Server to parameterize your SQL statements you have three options: stored procedures, sp_executesql, or forced parameterization. Stored procedures almost always have a query plan created and reused.
Let's try without sp_executesql first:
SELECT SUM(LineTotal) AS LineTotal
FROM Sales.SalesOrderHeader H
JOIN Sales.SalesOrderDetail D ON D.SalesOrderID = H.SalesOrderID
WHERE H.SalesOrderID = 56000
GO
SELECT SUM(LineTotal) AS LineTotal
FROM Sales.SalesOrderHeader H
JOIN Sales.SalesOrderDetail D ON D.SalesOrderID = H.SalesOrderID
WHERE H.SalesOrderID = 56001
GO
select stats.execution_count AS exec_count, p.size_in_bytes as [size], LEFT([sql].[text], 80) as [plan_text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
GO

Now we will use sp_executesql.
EXEC sp_executesql N'SELECT SUM(LineTotal) AS LineTotal
FROM Sales.SalesOrderHeader H
JOIN Sales.SalesOrderDetail D ON D.SalesOrderID = H.SalesOrderID
WHERE H.SalesOrderID = @SalesOrderID', N'@SalesOrderID INT', 56000
GO
EXEC sp_executesql N'SELECT SUM(LineTotal) AS LineTotal
FROM Sales.SalesOrderHeader H
JOIN Sales.SalesOrderDetail D ON D.SalesOrderID = H.SalesOrderID
WHERE H.SalesOrderID = @SalesOrderID', N'@SalesOrderID INT', 56005
GO
select stats.execution_count AS exec_count, LEFT([sql].[text], 80) as [plan_text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) sql
join sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle
GO
See the difference :D
You can set forced parameterization at the database level with:
ALTER DATABASE database_name SET PARAMETERIZATION FORCED

Awaiting your comments :)

Cannot open user default database. Login failed.

Well, there can be many different scenarios, but I am pretty sure those will be simple to drill down ...

In my case there was a Windows group with 6 users in it.
A login was created for that group in SQL Server, and whenever the users were trying to log in through SSMS they were getting the error below:

Msg 4064, Level 11, State 1, Server ServerName, Line 1
Cannot open user default database. Login failed.

Out of the 6 users, only 4 were having this problem. So we changed the default database for the login to master. It still gave the same error. We then created a new login for one of the users from that group separately, with its default database set to master. We logged in using that login successfully. That was strange. We then deleted that login and tried to log in again via SSMS using the same login that we had just deleted. Ideally it should have given us an error something like "login failed for the user .... 18456", but it was still giving us the same error:
Cannot open user default database. Login failed.
Login failed for user 'XXXX\YYYY'. (Microsoft SQL Server, Error: 4064)

This made us feel that this user must also be a part of other groups, and those groups must be pointing to some default database that does not exist, or is in a condition described in KB307864.

We then ran this query in master:
select name, dbname from syslogins where dbname not in (select name from sysdatabases)
And we won the battle :D ... It showed us 5 logins which had a default database that does not exist. So we concluded that all or some users in the current group also exist in other groups whose login's default database does not exist. We then changed the default database for those 5 logins to master, and the issue was resolved.
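
On SQL Server 2005 each such login can be fixed with ALTER LOGIN (the login name below is a placeholder):

```sql
-- point a hypothetical group login back at master
ALTER LOGIN [XXXX\YYYY] WITH DEFAULT_DATABASE = master;
```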


What if we want to run sp_msforeachtable for all databases together ?

First, here are a few msforeach commands that you may find useful :

sp_autostats for each table in a database :
EXEC sp_MSforeachtable @command1="print '?' EXEC sp_autostats '?'"

Reindexing each index in a database :
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"

Index fragmentation for each index in a database :
EXEC sp_MSforeachtable @command1="print '?' DBCC SHOWCONTIG('?')"

Space used by each table in a database :
EXEC sp_MSforeachtable @command1="EXEC sp_spaceused '?'"

Updating statistics in a database :
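Following the same pattern as the other commands here, a sketch:

```sql
-- Update statistics on every table in the current database
EXEC sp_MSforeachtable @command1="print '?' UPDATE STATISTICS ?"

-- or, for the whole database in one call :
EXEC sp_updatestats
```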

Disabling triggers for a database :
EXEC sp_msforeachtable "ALTER TABLE ? DISABLE TRIGGER all"

Finding the number of rows in each table of a database :
EXEC sp_msforeachtable 'select count (*) as ''?'' from ?'

Generating a script to detach all the user databases :
select 'EXEC sp_detach_db @dbname = N'''+RTRIM ( name)+''','''+'true''' from sys.sysdatabases where dbid>4 and name not like 'distribution'

You know the object name, but don't know which database it is in. Run the following query in text mode :

sp_msforeachdb 'use ?;print "?";select name from sys.sysobjects where name='''''

Now, let's say you need to run sp_MSforeachtable for each database ..but there is no built-in way to run it against all the databases together the way sp_msforeachdb runs a single command..

Let's try this way :
DECLARE @SQL NVarchar(4000)
SET @SQL = ''
SELECT @SQL = @SQL + 'EXEC ' + NAME + '..sp_MSforeachtable @command1=''DBCC DBREINDEX (''''*'''')'', @replacechar=''*''' + Char(13)
FROM master..sysdatabases
EXEC (@SQL) -- run the accumulated command string

Here, what we are doing is initialising @SQL to an empty string. The SELECT then walks through sysdatabases; the first row is master, so it appends 'EXEC master..' plus the rest of the string 'EXEC ' + NAME + '..sp_MSforeachtable @command1=''DBCC DBREINDEX (''''*'''')'', @replacechar=''*''' + Char(13) to @SQL.
So, after the first row @SQL will contain
EXEC master..sp_MSforeachtable @command1='DBCC DBREINDEX (''*'')', @replacechar='*'
On the second row it will hold the previous string in @SQL plus the same string for the next database name. To check this, just remove the "@SQL +" part and run the script: you will then see only the string for the last DBID, because each time the value in @SQL was overwritten :)..
Hence, this string keeps growing until the list of databases ends ..So, you may have to increase the nvarchar limit if there are a lot of databases ..

I have also copied a few MSxxxxx commands for your reference:

EXEC sp_MSforeachdb @command1="print '?' DBCC CHECKDB ('?')"
EXEC sp_MShelpcolumns 'Production.products'
EXEC sp_MSindexspace 'Production.products'
EXEC sp_MStablespace 'Production.products'

Hope this helps :D

Saturday, February 9, 2008

SQL Server 2005 set up takes a very long time to complete

Issue :
SQL Server 2005 set up takes a very long time to complete
Environment :
Database :SQL Server Enterprise Edition 2005 EN
OS : Windows server 2003

Root Cause :
Explanation : Why is it taking too much time to install SQL Server 2005
SQL Server 2005 Setup needs to configure security settings. When the primary domain has many external trust relationships with other domains or many lookups are performed at the same time, the time that is required to look up domain group names may increase significantly. Therefore, the time that is required to install SQL Server 2005 may increase. This is because by default, in the Microsoft Windows Server 2003 family and in the Microsoft Windows 2000 Server family, when the LookupAccountName function or the LsaLookupNames function resolves isolated names to security identifiers (SIDs), a remote procedure call (RPC) is made to domain controllers on external trusted domains. An isolated name is an ambiguous, non-domain-qualified user account.

Solutions :

Solution 1 :

Temporarily disable the network on the computer on which the SQL Server 2005 Setup program is running. To do this, either disconnect the network cable or type the following command at a command prompt: ipconfig /release *Con*. If you use ipconfig utility make sure to run ipconfig /renew *Con* after installation is done.
Note :This workaround will not succeed when you are installing a SQL Server failover cluster.

Solution 2 :

Follow the steps that are listed in the following Microsoft Knowledge base article to restrict the lookup of isolated names in external trusted domains:
818024 How to restrict the lookup of isolated names in external trusted domains by using the LsaLookupRestrictIsolatedNameLevel registry entry

Solution 3 :

Apply hotfix as per KB 910070
These are the exact steps that you need to perform.

Step 1 : Download the zip file to some location and extract it .
Step 2 : You will now have the files listed below
• AS2005-KB919357-x86-ENU.exe
• SQL2005-KB919357-x86-ENU.exe
• DTS2005-KB919357-x86-ENU.exe
• NS2005-KB919357-x86-ENU.exe
• RS2005-KB919357-x86-ENU.exe
• SQLTools2005-KB919357-x86-ENU.exe
• SQLSupport2005-KB919357-x86-ENU.exe
Step 3 : At a command prompt, run these .exe files by specifying the /X switch to extract these .exe files into different folders.

• AS2005-KB919357-x86-ENU.exe /X:C:\QFE\AS
• SQL2005-KB919357-x86-ENU.exe /X:C:\QFE\SQL
• DTS2005-KB919357-x86-ENU.exe /X:C:\QFE\IS
• NS2005-KB919357-x86-ENU.exe /X:C:\QFE\NS
• RS2005-KB919357-x86-ENU.exe /X:C:\QFE\RS
• SQLTools2005-KB919357-x86-ENU.exe /X:C:\QFE\Tools
• SQLSupport2005-KB919357-x86-ENU.exe /X:C:\QFE\SqlSupport
Step 4 : After you extract these .exe files, the .msp files appear in the following folders:

• C:\QFE\AS\hotfixas\files\sqlrun_as.msp
• C:\QFE\SQL\hotfixsql\files\sqlrun_sql.msp
• C:\QFE\IS\hotfixdts\files\sqlrun_dts.msp
• C:\QFE\NS\hotfixns\files\sqlrun_ns.msp
• C:\QFE\RS\hotfixrs\files\sqlrun_rs.msp
• C:\QFE\Tools\hotfixtools\files\sqlrun_tools.msp
• C:\QFE\SqlSupport\msp\sqlsupport.msp

Step 5 : Install SQL Server 2005 by running a command that resembles the following command:

\Servers\setup.exe SQLSUPPORTPATCH="C:\QFE\SqlSupport\msp\SqlSupport.msp" HOTFIXPATCH="C:\QFE\AS\hotfixas\files\sqlrun_as.msp;C:\QFE\SQL\hotfixsql\files\sqlrun_sql.msp;C:\QFE\IS\hotfixdts\files\sqlrun_dts.msp;C:\QFE\NS\hotfixns\files\sqlrun_ns.msp;C:\QFE\RS\hotfixrs\files\sqlrun_rs.msp;C:\QFE\Tools\hotfixtools\files\sqlrun_tools.msp"

This is the example that I followed :
C:\Enterprise\servers\setup.exe SQLSUPPORTPATCH="C:\QFE\SqlSupport\msp\SqlSupport.msp" HOTFIXPATCH="C:\QFE\AS\hotfixas\files\sqlrun_as.msp;C:\QFE\SQL\hotfixsql\files\sqlrun_sql.msp;C:\QFE\IS\hotfixdts\files\sqlrun_dts.msp;C:\QFE\NS\hotfixns\files\sqlrun_ns.msp;C:\QFE\RS\hotfixrs\files\sqlrun_rs.msp;C:\QFE\Tools\hotfixtools\files\sqlrun_tools.msp"

Useful Links :
KB 910070 and KB 818024

What to Do if you have screwed up your SQL Server 2005 Setup

For Stand alone machine :
Suppose you failed SQL Server 2005 setup , tried again ..failed ..did some changes , deleted a few folders etc ....etc ......and finally messed it all up :)........

before blowing up the machine try this ..
open a notepad and copy this content :

Windows Registry Editor Version 5.00









"Label"="Force protocol encryption"


"Label"="Trust Server Certificate"


"ProtocolName"="Named Pipes"

"Name"="Default Pipe"

"ProtocolName"="Shared Memory"


"Name"="Default Port"

"Name"="KEEPALIVE (in milliseconds)"

"Name"="KEEPALIVEINTERVAL (in milliseconds)"


"Name"="Default Server Port"

"Name"="Default Client NIC"

Now save this file as .reg >> double click this file and import it ......
start the setup .......and cross your fingers .....
I tried this on STD edition ........
Actually this is the complete healthy registry key set needed for a successful setup ...
I have seen it solve the problem about 50% of the time ..
All the best to you ......
Try at your own risk and take a complete registry backup .....

It is possible to delete the data/log files in SQL Express Edition even if the Instance is UP

Yes , that is true but thankfully only in SQL Express Edition :)

Research : This behavior is by design in SQL Express. Until the user database is explicitly used, SQL will not recover/open that database, so at SQL startup anyone can delete/move the data files. Once we connect to the database, the following entry gets logged into the SQL errorlog: 2007-06-07 19:16:09.59 spid51 Starting up database 'Test2'. You can verify this behavior using FILEHANDLE.EXE. So the risk of file deletion will be there until an application connects to SQL (or actually until the file handle is acquired by the SQL Server service).
Resolution / Workaround : The easiest workaround of this problem would be creating a startup stored procedure which will query any table from the user database. i.e.

1) Suppose we have a user database named 'TEST' in the Express instance; create a stored procedure in the master database which will query one row from this database:
use master
go
create procedure start
as
select top 1 id from TEST.dbo.testTable
go

2) Set this SP as startup stored procedure for this instance:
sp_procoption 'dbo.start','startup' ,'true'

Now on every startup, SQL will query this database table which will in turn open this database automatically.
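As a quick sanity check (a sketch against the procedure created above):

```sql
use master
-- Returns 1 if the procedure is marked to run at instance startup
select OBJECTPROPERTY(OBJECT_ID('dbo.start'), 'ExecIsStartup') as runs_at_startup
```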

Enjoi :)

Shrinking a Log file

What if your database is in Full recovery mode and the T-logs are huge :D.....

This indicates that either you have not taken log backups in a long time or the backups are failing. In Full recovery the T-log keeps growing until you back it up. This is not so in simple recovery mode ...

Now, if you try to shrink the database or the T-log file, it will not shrink, because there may be uncommitted or not-yet-backed-up transactions at the bottom of the virtual log file ..

To check this, run the DBCC LOGINFO command. You will see the last value in the Status column as 2 ..
To make it 0 :

1) take full backup ...
2) change the database recovery model to simple

and then shrink the logfile DBCC SHRINKFILE (msdblog);
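The steps above can be sketched as follows (MyDB and MyDB_log are placeholder names; after shrinking, switching back to full recovery needs a fresh full backup to restart the log chain):

```sql
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB.bak';   -- 1) take a full backup
ALTER DATABASE MyDB SET RECOVERY SIMPLE;                -- 2) switch to simple recovery
DBCC SHRINKFILE (MyDB_log);                             -- shrink the log file
ALTER DATABASE MyDB SET RECOVERY FULL;                  -- switch back, then take a fresh full backup
```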

If you do not want to take any of the steps above, try this :
repeatedly run the following script until
DBCC LOGINFO shows the last entry as having a Status of 0 (the 2 should move to one of the first entries)

CREATE TABLE DummyTrans (DummyColumn char (8000) not null)
DECLARE @Counter int
SELECT @Counter = 1
WHILE @Counter < 1000
BEGIN
-- the insert/delete pair generates log activity so the active VLF can wrap back to the start of the file
INSERT DummyTrans VALUES ('Dummy')
DELETE DummyTrans
SELECT @Counter = @Counter + 1
END
DROP TABLE DummyTrans

DBCC SHRINKFILE (logfile_name) -- use your log file's logical name here

Hope this helps ......

Avoid using Trace flags in Startup parameters

Symptoms :
The client did an in-place upgrade from 2000 to 2005, and server CPU utilization was quite high during idle times: 25% to 29% most of the time even with no user connections to the server. Also, SSMS seemed very slow to respond ..

I checked everything but still could not find any clue ....
I had 2 questions :

1) Why is SQL Server consuming 28-30% CPU when it is not doing anything
2) Why does everything seem drastically slow at 28% CPU when another instance is pretty fast at 57% CPU .....

I finally checked the error logs and found that 3 trace flags were enabled in the startup parameters viz. -T818, -T815 and -T8809


Error: 26054, Severity: 16, State: 1

Imagine if your SQL Server resource is not coming up and you are getting something like this in the errorlog:

2007-11-29 00:10:03.00 Server Error: 26054, Severity: 16, State: 1.
2007-11-29 00:10:03.00 Server Could not find any IP address that this SQL Server instance depends upon. Make sure that the cluster service is running, that the dependency relationship between SQL Server and Network Name resources is correct, and that the IP addresses on which this SQL Server instance depends are available. Error code: 0x103.
2007-11-29 00:10:03.00 Server Error: 17182, Severity: 16, State: 1.
2007-11-29 00:10:03.00 Server TDSSNIClient initialization failed with error 0x103, status code 0xa.
2007-11-29 00:10:03.00 Server Error: 17182, Severity: 16, State: 1.
2007-11-29 00:10:03.00 Server TDSSNIClient initialization failed with error 0x103, status code 0x1.
2007-11-29 00:10:03.00 Server Error: 17826, Severity: 18, State: 3.
2007-11-29 00:10:03.00 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
2007-11-29 00:10:03.00 Server Error: 17120, Severity: 16, State: 1.
2007-11-29 00:10:03.00 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

All these errors are very different, but we need to look at the first error ...
Let's check the following registry key for all the instances we have on the cluster :
HKLM\Software\Microsoft\Microsoft SQL Server\MSSQL.1\Cluster(MSSQLSERVER)

The virtual server name has to be different for all the instances ........right ..
In my case we had 2 instances and both had the same virtual server name ( don't ask why ) ; not sure what the other person had done in the past .

Change the name of the virtual server on all nodes as needed and run the checkpoint on the registry or

Restoring System databases when LDF files have gone for a toss ..

Problem -

--Due to some hardware problem, the SAN drive holding the LDF files for all of the system databases was lost.

--Now you have recreated a new drive with same drive letter but contents are not there.

--You have database backup for all the system databases and other user databases.

--You have MDF file for system database as well.

Solution :

1) First use the rebuildm utility. If that fails, then
2) Restore all SYSTEM databases on a test instance of SQL with “new” as prefix ( remember, you have backups :D )

--Detach those databases from the test instance.

--On the original instance of SQL, rename all the previous system MDF files with “old” as prefix.

--Copy all the MDF & LDF files from the test instance to the actual instance folder and rename the data files to the actual names (like master.mdf, model.mdf, msdbdata.mdf), and the log files as well .

--Give proper permissions to all of the newly copied files if needed .

--Start SQL Server and it starts successfully.

--Attach all user databases and everything will work fine.

Hope it helps :)

DTExec: The package execution returned DTSER_FAILURE (1)

Is your M-Plan failing ......and you are getting this error :
Executed as user: TMMCADM\ClusterSQL. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started:
11:35:48 AM Progress: 2008-01-30 11:35:49.11 Source: {3764B470-DBB1-4A59-978D-E2CE331A1AF2} Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp".: 100%
complete End Progress DTExec: The package execution returned DTSER_FAILURE (1). Started: 11:35:48 AM Finished: 11:35:49 AM Elapsed: 1.218 seconds. The package execution failed. The
step failed.

Check the sp_configure values and see if "allow updates" is set to 1 by mistake ..
Reset it to 0 and you are done ......
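A quick sketch of checking and resetting that option:

```sql
-- show the current value of 'allow updates'
EXEC sp_configure 'allow updates';

-- reset it to the default of 0
EXEC sp_configure 'allow updates', 0;
RECONFIGURE WITH OVERRIDE;
```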

Restoring Database on a network using UNC path

Have you ever tried to keep your database files away from the actual server ?
Say, your SQL Server is on Server A and the data files are on Server B. Is it possible ?..

Lets try :)

RESTORE DATABASE [TestCorruption_Load]
FROM DISK = N'\\abhay\Backup\TestCorruption_Load.bak'
WITH MOVE N'TestCorruption_Load' TO N'\\abhay\Backup\abc.mdf',
MOVE N'TestCorruption_Load_log' TO N'\\abhay\Backup\abc.ldf'

Results :

Msg 5110, Level 16, State 2, Line 1
The file "\\abhay\Backup\abc.mdf" is on a network path that is not supported for database files.
Msg 3156, Level 16, State 3, Line 1
File 'TestCorruption_Load' cannot be restored to '\\abhay\Backup\abc.mdf'. Use WITH MOVE to identify a valid location for the file.
Msg 5110, Level 16, State 2, Line 1
The file "\\abhay\Backup\abc.ldf" is on a network path that is not supported for database files.
Msg 3156, Level 16, State 3, Line 1
File 'TestCorruption_Load_log' cannot be restored to '\\abhay\Backup\abc.ldf'. Use WITH MOVE to identify a valid location for the file.
Msg 3119, Level 16, State 1, Line 1
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

Yes , normally its not allowed and Microsoft does not support this ...

Now , enable trace flag 1807 :
dbcc traceon (3604,1807 , -1)

and here we go :

RESTORE DATABASE [TestCorruption_Load] FROM DISK = N'\\abhay\Backup\TestCorruption_Load.bak' WITH STATS = 10, RECOVERY , REPLACE , MOVE N'TestCorruption_Load' TO N'\\abhay\Backup\abc.mdf', MOVE N'TestCorruption_Load_log' TO N'\\abhay\Backup\abc.ldf'

Processed 912 pages for database 'TestCorruption_Load', file 'TestCorruption_Load' on file 1.
Processed 2 pages for database 'TestCorruption_Load', file 'TestCorruption_Load_log' on file 1.
RESTORE DATABASE successfully processed 914 pages in 1.264 seconds (5.917 MB/sec).

One interesting fact that I have noticed : the performance is much faster here as compared to a non-UNC restore .....try it on a test machine ...

But again , this is risky and not supported yet ....
Hoping this will be supported some day....


for decimal 1619 / hex 0x653 : ecFavMoveOffline (ec.h)
SQL_1619_severity_21 sql_err : Could not open tempdb. Cannot continue.
as an HRESULT: Severity: SUCCESS (0), FACILITY_NULL (0x0), Code 0x653
This installation package could not be opened. Verify that the package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer package.

You will get this error when the setup blows up at the very beginning .

Root cause : The Windows installer needs the SYSTEM user added to the folder where the SQL Server setup files are located. By default it is added to every folder, but if someone plays around with the OS users and groups, there is a possibility that the "by default" setting goes away ..

Solution : add SYSTEM to the SQL Server setup folder and give full permission ....

SQL Server memory and CLR assemblies

I will keep this short. First of all, CLR and JVM are built upon the same concepts :) ..Merging the CLR with SQL Server 2005 gives SQL Server the ability to create database objects using modern object-oriented languages like VB.NET and C#, because CLR objects support complex logic and have better computation capabilities. That's the reason their use is increasing day by day ..

Many a time I have seen cases where people say that they have memory pressure due to CLR objects ..we always feel ( me included ) that CLR objects / assemblies will only consume MemToLeave ..it also consumes the buffer pool ..but most SQL CLR memory allocations occur outside of the SQL buffer pool, in the MemToLeave area (384 MB by default)..

[ MemToLeave = 255 worker threads x 0.5 MB + 256 MB reservation <-- the 256 MB part is what the -g switch controls ]

Now , There are two types of memory pressures
1) Physical memory pressure based on the amount of available system memory
2) Virtual Address Space (VAS : 2 gigs by default on 32 bit architecture)memory pressure based on the number of available virtual addresses.

Query sys.dm_os_memory_clerks dmv :
select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb from sys.dm_os_memory_clerks where type = 'MEMORYCLERK_SQLCLR'

Here single_pages_kb is allocations from the buffer pool and multi_pages_kb is from the MemToLeave area ..the same and better output can be seen in DBCC MEMORYSTATUS as well.
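For a broader view, a sketch that totals memory per clerk type (column names as in SQL Server 2005/2008; later versions renamed them):

```sql
-- Total memory per clerk type in MB (SQL Server 2005/2008 column names)
select type,
       sum(single_pages_kb + multi_pages_kb + virtual_memory_committed_kb) / 1024 as total_mb
from sys.dm_os_memory_clerks
group by type
order by total_mb desc;
```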

Now, SQL CLR can't use so much memory that it causes external physical memory pressure (don't ask why :D).

So, SQL CLR will not be allowed to use more than 256 MB by default..

Now , if you are getting errors like :

Error: 701 , Severity: 17, State: 123.
There is insufficient system memory to run this query.
failed to reserve a contiguous space of 65536 K

The first thing I would suggest is to concentrate on the CLR assemblies and see if we can optimize them.
Other stuff we can do: increase the MemToLeave area by using -g384 or -g512 in the startup parameters (if we see that the buffer pool is not heavily used) ...further, we can check if there is no harm in using the /3GB switch in the boot.ini file ....keep server activity confined to SQL Server, keep your database finely tuned by using more SPs, updating stats, defragmenting disks, rebuilding indexes .....

If nothing is helping, move to a 64-bit machine :D.......that's the last option....
Never use CLR-based assemblies on a virtual server ....
Your comments are welcome ...

Tuesday, January 22, 2008

About myself

Hi Guys ,
Won't waste much time here defining myself .
Currently working with Microsoft, Bangalore as a SQL Server Breakfix engineer .
The sole purpose of creating this blog is to help and learn ..
So all are free to post and I will try to reply ASAP ......
I will also post some case studies and fixes/solutions that are not KBs yet .....
Wish you all happy learning, and I will update this blog with my first post soon...