Monday, December 30, 2019

SQL Server Memory Configuration

Configuring SQL Server memory settings is critical for server performance. Learn how to monitor memory usage and avoid the most common configuration pitfalls.

 

Correctly configuring SQL Server memory settings is critical for server performance, but one of the things I frequently came across when reviewing SQL Server installations whilst working in CSS for Microsoft was just how many of them had not been set up with appropriate memory configuration settings, or, as in many cases, not set up in the way the administrators of the system assumed they were. Usually the DBAs thought the system was set up to use all of, say, 8 GB of RAM, but no changes had been made to the OS or SQL Server configuration, so their (32-bit) SQL Server could only address 2 GB, and was reporting that it was using 1.6 GB.

(Edit 2013-08-26: Since writing this post nearly 5 years ago I've noticed an increasing trend in visits to this page via questions on forums or search engines relating to the fact that SQL Server has consumed all the memory on the system. The quick answer is that this is how 64-bit SQL Server behaves out of the box if left unconfigured. The rest of this post goes into the how and the why, as well as the resolution. Also, despite the fact that most systems are now 64-bit, I've left the 32-bit material in this post because I know there are still a lot of 32-bit SQL Server systems out there.)

The problem is due in part to the fact that on 32-bit systems configuration changes usually have to be made both in SQL Server and at the OS level, and in part to the sprawl of documentation available on configuring SQL Server's memory settings, as opposed to a single jumping-off point which runs through all the settings and considerations involved.
Add to that the black art of establishing exactly how much memory SQL Server is actually using (most of the obvious options only show how much memory the buffer pool is using) and it's easy to see why this is such a problem area.
In this post I’ll attempt to clear some of the smog and provide what I hope will be one document which answers most of the questions that arise about configuring SQL Server’s memory usage.
This discussion covers configuring memory for SQL Server 2000, SQL Server 2005 and SQL Server 2008 (with the exception of the Resource Governor). It assumes an edition of SQL Server that is not internally limited in its memory usage.

32-bit or 64-bit SQL Server?

There's a big difference in memory configuration between 64-bit and 32-bit SQL Server, so it's not possible to start a discussion about SQL Server's memory management without clarifying whether we are dealing with the 32-bit or 64-bit version of the product, as this is key to how much memory SQL Server can address and (almost as importantly) how it addresses that memory.
Until fairly recently 32-bit software was ubiquitous: the server Windows operating systems were 32-bit, and your desktop, usually Windows XP, was 32-bit. I'll therefore be focusing a fair bit on 32-bit SQL Server, as this is what requires the most configuration, and also where most of the confusion lies.
So, here goes.

Windows memory architecture

A quick aside on Windows memory architecture first.
The amount of memory a 32-bit process can access is 2^32 or 4294967296 bytes, or 4 GB.
On a 32-bit Windows OS this 4 GB of memory is not all addressable by a single process. Instead, it’s partitioned by the OS into two address spaces. 2 GB is kept by the OS (commonly referred to as the kernel) and the remaining 2 GB is the user mode address space, or the area each application (process) will have access to. Whilst each user mode process gets 2 GB as its addressable memory range, the kernel mode area is shared between all processes. SQL Server runs in the user mode address space and is bound by default to the 2 GB of memory limit on a 32-bit OS.
This directly addressable memory will hereon be referred to by what it is more commonly known as, the virtual address space or VAS.

Configuring SQL Server’s memory settings

SQL Server’s default out-of-the-box memory limit is deliberately set to a very high value of 2147483647 MB, which basically means all available memory, but as you should now know, there’s no way it can actually use anywhere near that much memory, particularly on a 32-bit platform.
64-bit operating systems have a far, far bigger address space open to them; 8 TB, to be exact. Before you run off to your calculator to evaluate 2^64: the answer won't be 8 TB, but 8 TB is what each user mode application gets due to current hardware and OS limitations. The kernel also gets 8 TB, and this kernel address space is shared by all processes, just as in 32-bit Windows.
Having said all that, I should point out that no current MS Windows OS supports more than 2 TB of physical memory.
What this means for 64-bit SQL Server is that, out of the box, it can address all the memory on a server without any special configuration changes either within SQL Server or at the OS level. The only thing you need to consider is a cap on its memory usage; capping memory usage is covered in the 'max server memory' and 'min server memory' section further down.

To /3GB or not to /3GB

So, as 32-bit applications are natively restricted to a 2 GB VAS, OS configuration tweaks are required to allow access to more than 2 GB, and these are covered next.

The first modification is one that, rather ironically, should be used as a last resort. Ideally, it should be used on the advice of Microsoft Support (PSS).
I’m choosing to get it out of the way now because the /3GB setting is probably the most well known and most misused.

/3GB allows a 32-bit process to increase its VAS to 3 GB by taking away 1 GB of address space from the kernel, and this is why it’s a last resort; there’s no such thing as a free lunch as the removal of 1 GB of addressable memory from the OS can introduce instability. More on that shortly.

To allow a 32-bit process to gain a 3 GB VAS you have to add the /3GB switch to the Windows boot.ini file.

As I stated, this can introduce system instability by starving the OS of System Page Table Entries (PTEs). A discussion about PTEs is out of the scope of this blog, but its effects can be dramatic and cause blue-screens. The good news is that this mainly affected Windows 2000 so you should be fine if you’re on a later Windows version.

If you're still looking after a legacy system, there is some room for manoeuvre here: by adding the /USERVA switch to the boot.ini it is possible to reduce the VAS increase from a straight 3 GB to a lower user-defined amount, which gives the OS room to breathe and thus resolves any instability issues.
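To illustrate, here is roughly what the boot.ini entries look like with these switches applied; the ARC path, OS description and the /USERVA value are illustrative and will differ on your system (only one entry would be active):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030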

The main reason PSS will advise you to use /3GB is if you are suffering VAS starvation issues, such as a bloated procedure cache. The procedure cache can only reside in VAS memory (also see the next section on MemToLeave), because 32-bit SQL Server only allows database pages, held in the part of its memory cache known as the buffer pool, to use AWE-mapped memory.

MemToLeave

(EDIT 2011-06-30: The correct terminology for this is Virtual Address Space Reservation.) Because of the inherent address space limitations of a 32-bit process, a certain amount of memory has to be set aside by SQL Server on startup for overheads. This memory is reserved up front so that the buffer pool cannot consume it.

COM objects, extended stored procs, third-party backup solutions, some anti-virus apps, memory allocations exceeding 8 KB, and the memory allocated to the threads SQL Server creates to service user connections all come from a section of memory within the VAS but outside the buffer pool, typically referred to as the MemToLeave area. This defaults to 384 MB on, for example, a 2-proc 32-bit SQL Server. If you want to know more about how it is calculated, see Jonathan Kehayias's post covering this.

0.5 MB is the default thread stack size for a thread on 32-bit Windows. 64-bit Windows has a default stack size of 2 MB or 4 MB depending on which 64-bit flavour of Windows you are running (AMD x64 or IA64).
SQL Server 2005 and beyond uses a formula to calculate the max worker threads setting which affects the size of the MemToLeave area.
There is a SQL Server startup parameter (-g) which can be used to increase the MemToLeave area but, again, only do this if advised by PSS, as it reduces the maximum amount of memory the buffer pool can use (it's ignored on 64-bit, as MemToLeave/VAS reservation isn't an issue on that architecture).
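As a hedged sketch of how -g is applied (the 512 MB value is purely illustrative; in practice the switch is usually added to the service startup parameters rather than typed at a prompt):

REM Start SQL Server from the command prompt with a 512 MB VAS reservation
sqlservr.exe -c -g512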

4 GB of RAM and beyond

So, we know 32-bit SQL Server can use 2 GB out of the box, and up to 3 GB (with an OS tweak that is best avoided, if at all possible).
However, 32-bit SQL Server can benefit from much more memory than 3 GB with the help of OS and SQL Server configuration modifications which will be covered next.

/PAE

To address more than 4 GB of RAM on 32-bit Windows, the OS needs the /PAE switch added to the boot.ini file. If your system supports hot-swappable memory, though, you won't need to add this, as Windows should automatically see the additional memory. If you're not sure, take a look at how much memory the OS can see via System properties; if you have more than 4 GB installed and the OS is only showing 4 GB, review your boot.ini settings. I'm not mentioning specific Windows versions because the /PAE switch applies to all current 32-bit versions of Windows.
(EDIT 2011-06-30: For Windows Server 2008 you have to run ‘BCDEDIT /SET PAE FORCEENABLE’ from a CMD prompt running under administrator privileges).

Both /PAE and /3GB

Some systems have both /3GB and /PAE enabled.
This is fine as long as the system does not have more than 16 GB of RAM. Add any more memory and Windows will not recognise it, because the kernel needs the 1 GB of address space taken away by /3GB in order to manage memory above 16 GB.

Clusters

No special memory configuration settings are required for a cluster, but I thought I'd better mention clusters specifically because you wouldn't believe how many installations out there have different settings on different nodes within the same cluster.
So, make sure any OS setting changes like /3GB or /PAE are consistently applied across all nodes.

Enable AWE

After configuring the OS, you’ll need to configure SQL Server by enabling AWE (Address Windowing Extensions). AWE is in essence a technique for ‘paging’ in sections of memory beyond the default addressable range.
AWE can be enabled in SQL Server using Query Analyzer/SQL Server Management Studio (SSMS) via the following statements:

sp_configure 'show advanced options', 1
 RECONFIGURE
 GO
 sp_configure 'awe enabled', 1
 RECONFIGURE
 GO
 
AWE enablement is not a dynamic option and will require a SQL Server restart, so before you do that make sure the SQL Server service account has the ‘Lock Pages in Memory’ privilege assigned to it.
Once AWE has been enabled within SQL Server and the ‘Lock Pages in Memory’ privilege has been assigned you should be good to go after a restart.
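Once the instance is back up, you can verify that AWE memory is actually being allocated. A minimal sketch for SQL Server 2005/2008 (the clerk type shown is the buffer pool's memory clerk):

-- A non-zero total confirms the buffer pool is using AWE-mapped memory.
SELECT SUM(awe_allocated_kb) AS awe_allocated_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'MEMORYCLERK_SQLBUFFERPOOL';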
(Edit: SQL Server 2008 R2 is the last version that supports AWE functionality. As of SQL Server 2012 the 'awe enabled' option is no longer supported. If you have a 32-bit install of SQL Server 2012 I strongly urge you to read up on that change and its implications for systems with more than 4 GB of RAM.)

‘max server memory’ and ‘min server memory’

The more memory you give SQL Server, the greater the need to set an upper limit on how much it uses. When you start SQL Server it’ll ramp up its memory usage until it has used up all the memory it can access, which will either be an internal OS limit or a SQL Server configured limit.

A 32-bit SQL Server instance on default settings will therefore grab up to 2 GB if the workload demands it.
An AWE-enabled SQL Server instance will go on to consume all the memory on the system if the workload is there and an upper limit on its memory usage has not been set.

Having said all this, the key thing to remember is that the 'max server memory' setting only controls the memory assigned to the buffer pool, which is the memory cache SQL Server uses; every database page that the system reads or modifies is read into the buffer pool first. What this means is that there is a substantial area of memory which we cannot directly monitor or control (make sure you read the MemToLeave section), but for which we have to ensure enough memory is available. It is from this area of memory that allocations for COM objects, CLR functions, linked server queries, OLE DB providers and multi-page allocations (i.e. anything over 8 KB) are made. This is what complicates sizing SQL Server's memory usage correctly, as every system will use the MemToLeave area differently.

Setting a limit has the double benefit of not starving the OS of resources and avoiding 'Out of memory' errors, which can occur even on SQL Server systems with a lot of memory. The latter (rather contradictory) situation can arise because, if no upper limit has been set via the 'max server memory' setting, SQL Server will try to allocate more memory when it is already at the system limit instead of freeing up memory it is already using.

Configuring memory for multiple instances

A third reason to set an upper limit is if you have more than one SQL Server instance installed on a single host, as this will stop the instances competing for memory.
Allocate a high enough ‘max server memory’ limit to each instance to allow it to do its job without running into memory starvation issues, whilst reserving the bulk of the memory for higher priority instances (if any) and the OS.
This is where benchmarking comes in handy.

To set a max server memory limit of 12 GB via Query Analyzer/SSMS:

sp_configure 'max server memory', 12288
 RECONFIGURE
 GO
 
SQL Server ramps up its memory usage because by default it reserves no memory on startup; this is controlled by the 'min server memory' setting. Setting this to a higher value has the benefit of reserving a set amount of memory for SQL Server from the off, which can provide a slight performance benefit, especially on busy systems. It's actually not uncommon to see 'min server memory' and 'max server memory' set to the same value, to reserve all of SQL Server's memory straight away. The downside is that SQL Server will take slightly longer to start up than if 'min server memory' were set to a low value.
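For example, to reserve 4 GB from startup (the value is illustrative; size it to your own workload):

sp_configure 'min server memory', 4096
 RECONFIGURE
 GO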

Older SQL versions did not really release memory

This will probably get me into a bit of trouble, as there are KBs that clearly state that SQL Server releases memory when the OS is under pressure.
True, but only under a lot of pressure, and SQL Server 2000 and its forerunners were notoriously bad at this, often requiring a system restart, which is why it is vital to set an upper limit via 'max server memory' for any system (not just SQL Server 2000).

Memory corruption

Slightly off-topic, but this is an appropriate place to bring this up.
Certain builds of Windows 2000 and Windows Server 2003 contained a potentially serious memory corruption problem which affected SQL Server more than other applications, mainly because there are few applications that run on Windows that can utilise the amount of memory SQL Server does.
It’s difficult to overstate the problems this can cause, so make sure you’re on the appropriate Windows Service Packs if you’re running SQL Server on a PAE enabled system.

Another issue that arose in SQL Server 2000 SP4 was a bug that meant SQL Server only saw half the memory on awe enabled systems, although it was identified quickly and the hotfix for this was placed alongside the SP4 download.

32-bit SQL Server on 64-bit Windows

If you have a 32-bit SQL Server on 64-bit Windows the SQL Server process can access the entire 4 GB VAS.

Checking SQL Server’s memory usage

This is another area where there is a lot of confusion, so below is a run-through of the most common methods for confirming SQL Server's memory usage.

Ignore Task Manager

If you have an awe enabled SQL Server instance, do not rely on Task Manager to display memory usage as it does not show the AWE memory a process is using, so the memory usage figure it presents for the SQL Server process (sqlservr.exe) will be incorrect.

DBCC MEMORYSTATUS

Running this command outputs the memory usage of SQL Server, including how that memory is allocated, so unless you need to know how and where that memory is being used, the output it generates can be a bit bewildering. The important parts of the output pertaining to SQL Server's total memory usage are as follows:

Buffer Counts                  Buffers
 ------------------------------ --------
 Committed                      3872
 Target                         65536
 Hashed                         2485
 Stolen Potential               60972
 External Reservation           0
 Min Free                       64
 Visible                        65536
 Available Paging File          702099
 
The key figures in the above output are Committed, Target and Hashed.
Committed is the amount of memory in use by the buffer pool and includes AWE pages.
Hashed is the number of database pages currently cached in the buffer pool.

Target is how big SQL Server wants the buffer pool to grow, so you can infer from this whether SQL Server wants more memory or is releasing memory.
There are excellent KB articles on interpreting all the output: 'INF: Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage' for SQL Server 2000, and 'How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005'.

[EDIT 2009-05-02: Remember the buffer count numbers refer to pages of memory, which are 8 KB in SQL Server.]
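To convert the buffer counts into megabytes, multiply by 8 KB and divide by 1024; for example, the Committed figure of 3872 pages above works out to roughly 30 MB:

-- 3872 pages * 8 KB per page / 1024 = ~30.25 MB
SELECT 3872 * 8 / 1024.0 AS committed_mb;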

System Monitor (perfmon)

This is the perfect way to get a quick reference on exactly how much memory SQL Server is using at any given moment. Start System Monitor and add the SQL Server: Memory Manager: Total Server Memory (KB) counter.
Replace "SQL Server" with MSSQL$ followed by the instance name if it's not a default instance, e.g. MSSQL$INSTANCE1.
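On SQL Server 2005 and later, the same counter can also be read from within SQL Server itself. A minimal sketch (for a named instance the object name becomes MSSQL$INSTANCE1:Memory Manager):

-- Same value as the perfmon counter, in MB (cntr_value is reported in KB).
SELECT cntr_value / 1024 AS total_server_memory_mb
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Total Server Memory (KB)';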

‘Total’ memory usage

When trying to establish exactly how much memory SQL Server is using, it's not just the buffer pool memory you have to look at, but the MemToLeave area as well. The key point to bear in mind here is that it's not only SQL Server that can make allocations from this latter area of memory but third-party components as well, which can make it impossible to precisely account for SQL Server's absolute memory usage. This is contrary to some myths out there about calculating SQL Server's memory usage via e.g. DBCC MEMORYSTATUS: such methods can only account for SQL Server's own memory allocations, not allocations made by foreign processes.
[EDIT 2011-06-27: Soft NUMA section removed.]

64-bit

I mentioned at the start of this post that all you have to worry about for 64-bit SQL Server is setting a max memory limit, as SQL Server can address all the memory current Windows operating systems support, up to the 8 TB VAS limit. That's mostly true, with the exception of a certain privilege that the SQL Server service account needs: the 'Lock Pages in Memory' (LPIM) privilege.
This privilege is vital as it prevents the OS from paging out SQL Server memory to the swap file.

With the introduction of SQL Server 2005, this right was restricted on 64-bit Windows to only take effect on Enterprise Editions of SQL Server, so if you're wondering why your huge new multi-gigabyte multi-core 64-bit system is paging like crazy, this might be why. [Edit: This has finally been reversed for both SQL Server 2005 and SQL Server 2008 Standard Editions: http://blogs.msdn.com/psssql/archive/2009/04/24/sql-server-locked-pages-and-standard-sku.aspx]
Whilst we’re on the subject of paging on 64-bit SQL Server systems, take a look at the following KB:
'How to reduce paging of buffer pool memory in the 64-bit version of SQL Server 2005', which covers a number of issues that cause SQL Server's (Standard or Enterprise edition) memory to be paged out.

[EDIT 2013-09-19: More on LPIM. Memory management is enhanced in every version of SQL Server, and for SQL Server 2008 onwards the prevailing consensus is that this is no longer required; see this article for more information.]

If your 64-bit system has hundreds of GB of RAM, don’t assign it all via the ‘max server memory’ setting. The advice if you have a lot of memory changes with each version of SQL Server. In SQL Server 2005 the (very) general recommendation was to leave aside 1 GB for every 16 GB, so if you had a 256 GB system this would mean setting ‘max server memory’ no higher than 240 GB.
For SQL Server 2008 the SQLCAT team posted a best practices article (based on an OLTP system) advising 1 GB to be set aside for every 4 cores, so if our 256 GB system had 32 cores that would be a ‘max server memory’ setting of 248 GB. The usual caveats of assuming this is a system dedicated to SQL Server apply.

Personally, I'd be more cautious and use the former strategy, then baseline the system for a few weeks using the System Monitor (perfmon) counter Memory > Available MBytes, which shows how much free memory the system has, and tweak the max memory setting accordingly. Every system will use CLR functions, linked server queries and third-party DLLs differently (i.e. all the stuff that runs in the database but does not use memory from the buffer pool; the buffer pool is the only facet of SQL Server's memory usage that the 'max server memory' setting controls).
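Putting numbers on that: for the hypothetical 256 GB server above, the more cautious 1 GB per 16 GB rule sets aside 16 GB, giving a 'max server memory' of 240 GB, specified in megabytes:

-- 256 GB - (256 / 16) GB = 240 GB = 240 * 1024 = 245760 MB
sp_configure 'max server memory', 245760
 RECONFIGURE
 GO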

In summary…

The table below describes how much memory SQL Server can use, and assumes an edition of SQL Server that has no internal limitations on how much memory it can use (e.g. Express and Workgroup editions are limited to 1 GB and 3 GB respectively).

SQL Server type      Up to 4 GB installed                   More than 4 GB installed (/PAE enabled 1)
-------------------  -------------------------------------  -----------------------------------------
32-bit SQL Server    2 GB by default; 3 GB with /3GB 2      All available RAM 3
64-bit SQL Server    All available RAM 3                    All available RAM 3

1 Not all 32-bit systems now need to have /PAE explicitly set in boot.ini for the OS to see more than 4 GB of RAM.
2 Assuming the /USERVA switch has not been used to tune memory usage to between 2 GB and 3 GB.
3 Assuming 'max server memory' is left on defaults, otherwise SQL Server will use no more memory than that stipulated by the 'max server memory' setting.
When I started this post I wanted to keep it as short and succinct as possible, but I realised pretty quickly that was never going to happen, as there's a lot more to configuring SQL Server's memory usage than simply setting a 'max server memory' limit.

It’s a complex undertaking, especially in a 32-bit environment. It’s not easy to cover all the pertinent points without branching off and describing the different areas of its memory architecture, although I’ve tried to provide the relevant information without going into too much detail.

The key thing to remember is that the 'max server memory' setting is a misnomer: it only accounts for the memory assigned to the buffer pool. Memory assigned to threads, linked server queries, the CLR and a host of other components comes from outside the buffer pool.

Hopefully, this post has helped clarify a little of how to configure SQL Server’s memory usage and provided enough information to answer most memory configuration related questions, although, as you might have guessed, there’s no black-and-white way of precisely determining SQL Server’s memory usage as there are so many external processes that can make allocations from within SQL Server’s address space.


Fixing damaged pages using Page Restore or Manual inserts!!

Here's an interesting scenario that cropped up today. You have a database on a RAID array that failed and zeroed out a page. How can you get the data back?

There are two ways to do it, depending on the database recovery model and version of SQL Server - single-page restore or manual insert/select - both of which rely on you having a backup of the database. You can use single-page restore if you're on SQL Server 2005 and the database is in the Full or Bulk-Logged recovery model; otherwise you need to use the manual method, and that will only work as long as you know the data being salvaged hasn't changed since the last backup.
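If you're not sure which recovery model a database is using, a quick check (SQL Server 2005 and later):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'dbccpagetest';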

Let's try them both. Here's a script to create a test database and make a backup of it:

-- Create the database.

USE master;
GO
CREATE DATABASE dbccpagetest;
GO

ALTER DATABASE dbccpagetest SET RECOVERY FULL;
GO

-- Create a table to play with.

USE dbccpagetest;
GO
CREATE TABLE sales (
salesID INT IDENTITY,
customerID INT DEFAULT CONVERT (INT, 100000 * RAND ()),
salesDate DATETIME DEFAULT GETDATE (),
salesAmount MONEY);

CREATE CLUSTERED INDEX salesCI ON sales (salesID);
GO

-- Populate the table.

SET NOCOUNT ON;
GO
DECLARE @count INT
SELECT @count = 0
WHILE (@count < 5000)
BEGIN
INSERT INTO sales (salesAmount) VALUES (100 * RAND ());
SELECT @count = @count + 1
END;
GO

-- Take a full backup.

BACKUP DATABASE dbccpagetest TO DISK = 'C:\dbccpagetest.bak' WITH INIT;
GO

I'm going to simulate our scenario by shutting down the database and using a hex editor to zero out page 158 of the database. (This translates to zeroing 8192 bytes at byte offset 1294336 of the file.)
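The offset is simply the page number multiplied by the 8 KB page size:

-- Byte offset of page 158 = 158 * 8192 bytes per page
SELECT 158 * 8192 AS byte_offset; -- 1294336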

Now if I run checkdb, I get the following:

Msg 8909, Level 16, State 1, Line 1

Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:158) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).

CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.

Msg 8928, Level 16, State 1, Line 1

Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data): Page (1:158) could not be processed. See other errors for details.

CHECKDB found 0 allocation errors and 1 consistency errors in table 'sales' (object ID 2073058421).

CHECKDB found 0 allocation errors and 2 consistency errors in database 'dbccpagetest'.

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (dbccpagetest).

What does the page look like?
DBCC TRACEON (3604);
GO
DBCC PAGE (dbccpagetest, 1, 158, 3);
GO

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

PAGE: (0:0)
BUFFER:
BUF @0x02C0632C
bpage = 0x04C12000 bhash = 0x00000000 bpageno = (1:158)
bdbid = 9 breferences = 0 bUse1 = 37241
bstat = 0xc00009 blog = 0x89898989 bnext = 0x00000000

PAGE HEADER:
Page @0x04C12000

m_pageId = (0:0) m_headerVersion = 0 m_type = 0
m_typeFlagBits = 0x0 m_level = 0 m_flagBits = 0x200
m_objId (AllocUnitId.idObj) = 0 m_indexId (AllocUnitId.idInd) = 0 Metadata: AllocUnitId = 0

Metadata: PartitionId = 0 Metadata: IndexId = -1 Metadata: ObjectId = 0 
m_prevPage = (0:0) m_nextPage = (0:0) pminlen = 0
m_slotCnt = 0 m_freeCnt = 0 m_freeData = 0
m_reservedCnt = 0 m_lsn = (0:0:0) m_xactReserved = 0
m_xdesId = (0:0) m_ghostRecCnt = 0 m_tornBits = 16777216

Allocation Status

GAM (1:2) = ALLOCATED SGAM (1:3) = NOT ALLOCATED

PFS (1:1) = 0x60 MIXED_EXT ALLOCATED 0_PCT_FULL DIFF (1:6) = NOT CHANGED

ML (1:7) = NOT MIN_LOGGED

Msg 2514, Level 16, State 5, Line 2

DBCC PAGE error: Invalid page type - dump style 3 not possible.

Note the error at the end of the output - DBCC PAGE can't do an in-depth dump because it doesn't know what page type the page is. Let's try a full page hex dump using dump style 2 instead:

DBCC PAGE (dbccpagetest, 1, 158, 2);
GO
PAGE: (0:0)
DATA:
 
Memory Dump @0x44F3C000

44F3C000: 00000000 00020000 00000000 00000000 †................
44F3C010: 00000000 00000000 00000000 00000000 †................
44F3DFE0: 00000000 00000000 00000000 00000000 †................
44F3DFF0: 00000000 00000000 00000000 00000000 †................

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

It really is all zero. First we'll fix it using page restore.

USE master;
GO
RESTORE DATABASE dbccpagetest PAGE = '1:158' FROM DISK = 'C:\dbccpagetest.bak';

GO
Processed 1 pages for database 'dbccpagetest', file 'dbccpagetest' on file 1.

The roll forward start point is now at log sequence number (LSN) 32000000047000001. Additional roll forward past LSN 33000000001700001 is required to complete the restore sequence.

RESTORE DATABASE ... FILE= successfully processed 1 pages in 0.176 seconds (0.046 MB/sec).

Isn't that cool? You can restore up to 1000 single pages from a backup at a time. For VLDBs, this cuts the recovery time WAY down. Now we need to roll forward the log. We don't have any more log backups so we can finish the roll forward by backing up and restoring the tail of the log.

-- Need to complete roll forward. Backup the log tail...

BACKUP LOG dbccpagetest TO DISK = 'C:\dbccpagetest_log.bak' WITH INIT;
GO
-- ... and restore it again.
 
 RESTORE LOG dbccpagetest FROM DISK = 'C:\dbccpagetest_log.bak';
GO

Processed 5 pages for database 'dbccpagetest', file 'dbccpagetest_log' on file 1.

BACKUP LOG successfully processed 5 pages in 0.146 seconds (0.248 MB/sec).
 
Processed 0 pages for database 'dbccpagetest', file 'dbccpagetest' on file 1.

RESTORE LOG successfully processed 0 pages in 0.004 seconds (0.000 MB/sec).

And now we should have a clean database:

DBCC CHECKDB (dbccpagetest) WITH NO_INFOMSGS;
GO

Command(s) completed successfully.

Easy. But what if we can't do a page restore? Assuming I've corrupted the database in exactly the same way again, the first thing to do is make sure we can restore the backup, and then see what data range is on that page:

RESTORE DATABASE dbccpagetest_copy FROM DISK = 'C:\dbccpagetest.bak' WITH
MOVE N'dbccpagetest' TO N'C:\dbccpagetest_copy.mdf',
MOVE N'dbccpagetest_log' TO N'C:\dbccpagetest_log.ldf',
REPLACE;
GO
DBCC PAGE (dbccpagetest_copy, 1, 158, 3);
GO

Processed 184 pages for database 'dbccpagetest_copy', file 'dbccpagetest' on file 1.
Processed 2 pages for database 'dbccpagetest_copy', file 'dbccpagetest_log' on file 1.

RESTORE DATABASE successfully processed 186 pages in 0.361 seconds (4.205 MB/sec).

PAGE: (1:158)

BUFFER:

BUF @0x02BE8D38

bpage = 0x03FB4000 bhash = 0x00000000 bpageno = (1:158)

bdbid = 10 breferences = 1 bUse1 = 38283

bstat = 0xc00009 blog = 0x159a2159 bnext = 0x00000000

PAGE HEADER:

Page @0x03FB4000

m_pageId = (1:158) m_headerVersion = 1 m_type = 1
m_typeFlagBits = 0x4 m_level = 0 m_flagBits = 0x8200
m_objId (AllocUnitId.idObj) = 68 m_indexId (AllocUnitId.idInd) = 256
Metadata: AllocUnitId = 72057594042384384
Metadata: PartitionId = 72057594038386688 Metadata: IndexId = 1
Metadata: ObjectId = 2073058421 m_prevPage = (1:157) m_nextPage = (1:159)
pminlen = 28 m_slotCnt = 245 m_freeCnt = 11
m_freeData = 7691 m_reservedCnt = 0 m_lsn = (24:453:8)
m_xactReserved = 0 m_xdesId = (0:0) m_ghostRecCnt = 0
m_tornBits = -1020457745

Allocation Status

GAM (1:2) = ALLOCATED SGAM (1:3) = NOT ALLOCATED

PFS (1:1) = 0x60 MIXED_EXT ALLOCATED 0_PCT_FULL DIFF (1:6) = NOT CHANGED

ML (1:7) = NOT MIN_LOGGED

Slot 0 Offset 0x60 Length 31

Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP

Memory Dump @0x4542C060

00000000: 10001c00 d5030000 5bd30000 3f771101 †........[...?w..
00000010: b9980000 baa10a00 00000000 0500e0††††...............

UNIQUIFIER = [NULL]

Slot 0 Column 1 Offset 0x4 Length 4

salesID = 981

Slot 0 Column 2 Offset 0x8 Length 4

customerID = 54107

Slot 0 Column 3 Offset 0xc Length 8

salesDate = Jan 17 2007 4:35PM

Slot 0 Column 4 Offset 0x14 Length 8

salesAmount = 69.68

Slot 244 Offset 0x1dec Length 31

Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP

Memory Dump @0x4542DDEC

00000000: 10001c00 c9040000 bfa10000 57771101 †............Ww..
00000010: b9980000 c6b80500 00000000 0500e0††††...............

UNIQUIFIER = [NULL]

Slot 244 Column 1 Offset 0x4 Length 4
salesID = 1225
Slot 244 Column 2 Offset 0x8 Length 4
customerID = 41407
Slot 244 Column 3 Offset 0xc Length 8
salesDate = Jan 17 2007 4:35PM
Slot 244 Column 4 Offset 0x14 Length 8
salesAmount = 37.50

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
So we're looking at salesID range 981 to 1225 inclusive. Before we can copy the rows back to the damaged database, we need to get rid of the corrupt page; repair should delete the page for us. First, though, I'm going to take another backup - just in case something goes wrong!

BACKUP DATABASE dbccpagetest TO DISK = 'C:\dbccpagetest_corrupt.bak' WITH INIT;

GO
ALTER DATABASE dbccpagetest SET SINGLE_USER;
GO

DBCC CHECKDB (dbccpagetest, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;

GO
ALTER DATABASE dbccpagetest SET MULTI_USER;
GO

Processed 184 pages for database 'dbccpagetest', file 'dbccpagetest' on file 1.

Processed 4 pages for database 'dbccpagetest', file 'dbccpagetest_log' on file 1.

BACKUP DATABASE successfully processed 188 pages in 0.380 seconds (4.052 MB/sec).

Msg 8909, Level 16, State 1, Line 1

Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:158) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).

The error has been repaired.

CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.

CHECKDB fixed 0 allocation errors and 1 consistency errors not associated with any single object.

Repair: The Clustered index successfully rebuilt for the object "dbo.sales" in database "dbccpagetest".

Repair: The page (1:158) has been deallocated from object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data).

Msg 8945, Level 16, State 1, Line 1

Table error: Object ID 2073058421, index ID 1 will be rebuilt.
 
The error has been repaired.

Msg 8928, Level 16, State 1, Line 1

Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data): Page (1:158) could not be processed. See other errors for details.

The error has been repaired.
Msg 8976, Level 16, State 1, Line 1

Table error: Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data). Page (1:158) was not seen in the scan although its parent (1:154) and previous (1:157) refer to it. Check any previous errors.

The error has been repaired.

Msg 8978, Level 16, State 1, Line 1

Table error: Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data). Page (1:159) is missing a reference from previous page (1:158). Possible chain linkage problem.

The error has been repaired.

CHECKDB found 0 allocation errors and 3 consistency errors in table 'sales' (object ID 2073058421).

CHECKDB fixed 0 allocation errors and 3 consistency errors in table 'sales' (object ID 2073058421).

CHECKDB found 0 allocation errors and 4 consistency errors in database 'dbccpagetest'.

CHECKDB fixed 0 allocation errors and 4 consistency errors in database 'dbccpagetest'.

We should check the row count to see that it has dropped from the initial 5000 rows we inserted:

USE dbccpagetest;
GO
SELECT COUNT (*) FROM SALES;
GO
SELECT COUNT (*) FROM sales WHERE salesID > 980 AND salesID < 1226;
GO

And we're down to 4755 rows, as expected, with zero rows in that range. All we need to do now is copy the missing rows over from the restored copy. Remember, this will only work if you know that the data being salvaged hasn't changed since the backup was taken - otherwise you'll have old and new data mixed in the table, which will play havoc with your business. Before we copy the rows: we know we've got an identity column we'd like to preserve, so we set IDENTITY_INSERT ON, which tells the server not to generate new identity values for the inserted rows.

-- Make sure identity values survive.

SET IDENTITY_INSERT sales ON;
GO

-- Insert the missing rows.

SET NOCOUNT OFF;
GO
INSERT INTO sales (salesID, customerID, salesDate, salesAmount)
SELECT * FROM dbccpagetest_copy.dbo.sales AS R
WHERE R.salesID > 980 AND R.salesID < 1226;
GO

-- Restore identity behavior.

SET IDENTITY_INSERT sales OFF;
GO

(245 row(s) affected)

We copy over 245 rows and checking the row count again says we're back to 5000 rows.

SQL Server Service Startup Options

When you install SQL Server, Setup writes a set of default startup options in the Microsoft Windows registry. You can use these startup options to specify an alternate master database file, master database log file, or error log file.
Startup options can be set by using SQL Server Configuration Manager. For information, see How to: Configure Server Startup Options (SQL Server Configuration Manager).
Default startup options:

-d master_file_path: The fully qualified path for the master database file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\Data\master.mdf). If you do not provide this option, the existing registry parameters are used.
-e error_log_path: The fully qualified path for the error log file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG). If you do not provide this option, the existing registry parameters are used.
-l master_log_path: The fully qualified path for the master database log file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\Data\mastlog.ldf). If you do not specify this option, the existing registry parameters are used.
You can override the default startup options temporarily and start an instance of SQL Server by using the following additional startup options.
Other startup options:

-c: Shortens startup time when starting SQL Server from the command prompt. Typically, the SQL Server Database Engine starts as a service by calling the Service Control Manager. Because the SQL Server Database Engine does not start as a service when starting from the command prompt, use -c to skip this step.
-f: Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value (for example, over-committing memory) has prevented the server from starting. Starting SQL Server in minimal configuration mode places SQL Server in single-user mode. For more information, see the description for -m that follows.
-g memory_to_reserve: Specifies an integer number of megabytes (MB) of memory that SQL Server will leave available for memory allocations within the SQL Server process, but outside the SQL Server memory pool. The memory outside of the memory pool is the area used by SQL Server for loading items such as extended procedure .dll files, the OLE DB providers referenced by distributed queries, and automation objects referenced in Transact-SQL statements. The default is 256 MB.
Use of this option might help tune memory allocation, but only when physical memory exceeds the configured limit set by the operating system on virtual memory available to applications. Use of this option might be appropriate in large memory configurations in which the memory usage requirements of SQL Server are atypical and the virtual address space of the SQL Server process is totally in use. Incorrect use of this option can lead to conditions under which an instance of SQL Server may not start or may encounter run-time errors.
Use the default for the -g parameter unless you see any of the following warnings in the SQL Server error log:
  • "Failed Virtual Allocate Bytes: FAIL_VIRTUAL_RESERVE <size>"
  • "Failed Virtual Allocate Bytes: FAIL_VIRTUAL_COMMIT <size>"
These messages might indicate that SQL Server is trying to free parts of the SQL Server memory pool in order to find space for items such as extended stored procedure .dll files or automation objects. In this case, consider increasing the amount of memory reserved by the -g switch.
Using a value lower than the default will increase the amount of memory available to the buffer pool and thread stacks; this may, in turn, provide some performance benefit to memory-intensive workloads in systems that do not use many extended stored procedures, distributed queries, or automation objects.
-h: Reserves virtual address space for Hot Add memory metadata when AWE is enabled with 32-bit SQL Server. Required for Hot Add memory with 32-bit AWE, but consumes about 500 MB of virtual address space and makes memory tuning more difficult. Not required for 64-bit SQL Server. Hot Add memory is only available for Windows Server 2003, Enterprise and Datacenter editions. It also requires special hardware support from the hardware vendor.
-m: Starts an instance of SQL Server in single-user mode. When you start an instance of SQL Server in single-user mode, only a single user can connect, and the CHECKPOINT process is not started. CHECKPOINT guarantees that completed transactions are regularly written from the disk cache to the database device. (Typically, this option is used if you experience problems with system databases that should be repaired.) Enables the sp_configure allow updates option. By default, allow updates is disabled. Starting SQL Server in single-user mode enables any member of the computer's local Administrators group to connect to the instance of SQL Server as a member of the sysadmin fixed server role. For more information, see Troubleshooting: Connecting to SQL Server When System Administrators Are Locked Out. For more information about single-user mode, see Starting SQL Server in Single-User Mode.
-m"Client Application Name": When you use the -m option with SQLCMD or SQL Server Management Studio, you can limit the connections to a specified client application. For example, -m"SQLCMD" limits connections to a single connection and that connection must identify itself as the SQLCMD client program. Use this option when you are starting SQL Server in single-user mode and an unknown client application is taking the only available connection. To connect through the Query Editor in Management Studio, use -m"Microsoft SQL Server Management Studio - Query".
Client Application Name is case sensitive.
Important
Do not use this option as a security feature. The client application provides the client application name, and can provide a false name as part of the connection string.
-n: Does not use the Windows application log to record SQL Server events. If you start an instance of SQL Server with -n, we recommend that you also use the -e startup option. Otherwise, SQL Server events are not logged.
-s: Allows you to start a named instance of SQL Server. Without the -s parameter set, the default instance will try to start. You must switch to the appropriate BINN directory for the instance at a command prompt before starting sqlservr.exe. For example, if Instance1 were to use \mssql$Instance1 for its binaries, the user must be in the \mssql$Instance1\binn directory to start sqlservr.exe -s instance1.
-T trace#: Indicates that an instance of SQL Server should be started with a specified trace flag (trace#) in effect. Trace flags are used to start the server with nonstandard behavior. For more information, see Trace Flags (Transact-SQL).
-x: Disables the following monitoring features:
  • SQL Server performance monitor counters
  • Keeping CPU time and cache-hit ratio statistics
  • Collecting information for the DBCC SQLPERF command
  • Collecting information for some dynamic management views
  • Many extended-events event points
Caution
When you use the –x startup option, the information that is available for you to diagnose performance and functional problems with SQL Server is greatly reduced.
-E: Increases the number of extents that are allocated for each file in a filegroup. This option may be helpful for data warehouse applications that have a limited number of users running index or data scans. It should not be used in other applications because it might adversely affect performance. This option is not supported in 32-bit releases of SQL Server.
Note
When specifying a trace flag with the -T option, use an uppercase "T" to pass the trace flag number. A lowercase "t" is accepted by SQL Server, but this sets other internal trace flags that are required only by SQL Server support engineers. (Parameters specified in the Control Panel startup window are not read.)
Using Startup Options for Troubleshooting

Some startup options such as single-user mode and minimal configuration mode are principally used during troubleshooting. Starting the server for troubleshooting with the –m or –f options is most easily done at the command line, while manually starting sqlservr.exe.
Note
When SQL Server is started by using net start, startup options use a slash (/) instead of a hyphen (-).
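For example, a rough sketch of both approaches for a default instance (the options shown are illustrative):

REM From the instance's BINN directory: single-user, minimal configuration
sqlservr.exe -m -f

REM The equivalent via the service, using the slash syntax
net start MSSQLSERVER /m /f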
Using Startup Options During Normal Operations

You may want to use some startup options every time you start SQL Server. These options, such as -g or starting with a trace flag, are most easily applied by configuring the startup parameters using SQL Server Management Studio or SQL Server Configuration Manager. These tools save the startup options as registry keys, enabling SQL Server to always start with those options.

Sunday, November 24, 2019

Replication Questions!!

Frequently Asked Questions for Replication Administrators
The following questions and answers provide guidance on a variety of tasks faced by administrators of replicated databases.


Configuring Replication

________________________________________

Does activity need to be stopped on a database when it is published?

No. Activity can continue on a database while a publication is being created. Be aware that producing a snapshot can be resource-intensive, so it is best to generate snapshots during periods of lower activity on the database (by default a snapshot is generated when you complete the New Publication Wizard).

Are tables locked during snapshot generation?

The length of time that the locks are taken depends on the type of replication used:

• For merge publications, the Snapshot Agent does not take any locks.

• For transactional publications, by default the Snapshot Agent takes locks only during the initial phase of snapshot generation.

• For snapshot publications the Snapshot Agent takes locks during the entire snapshot generation process.

Because locks prevent other users from updating the tables, the Snapshot Agent should be scheduled to execute during periods of lower activity on the database, especially for snapshot publications.

When is a subscription available; when can the subscription database be used?

A subscription is available after the snapshot has been applied to the subscription database. Even though the subscription database is accessible prior to this, the database should not be used until after the snapshot has been applied. Use Replication Monitor to check the status of snapshot generation and application:

• The snapshot is generated by the Snapshot Agent. View the status of snapshot generation on the Agents tab for a publication in Replication Monitor. For more information, see How to: View Information and Perform Tasks for the Agents Associated With a Publication (Replication Monitor).

• The snapshot is applied by the Distribution Agent or Merge Agent. View the status of snapshot application in the Distribution Agent or Merge Agent page of Replication Monitor. For more information, see How to: View Information and Perform Tasks for the Agents Associated With a Subscription (Replication Monitor).

What happens if the Snapshot Agent has not completed when the Distribution or Merge Agent starts?

It will not cause an error if the Distribution Agent or Merge Agent runs at the same time as the Snapshot Agent. However, you must be aware of the following:

• If the Distribution Agent or Merge Agent is configured to run continuously, the agent applies the snapshot automatically after the Snapshot Agent completes.

• If the Distribution Agent or Merge Agent is configured to run on a schedule or on-demand, and there is no snapshot available when the agent runs, the agent will shut down with a message stating that a snapshot is not yet available. You must run the agent again to apply the snapshot after the Snapshot Agent has completed. For more information on running agents, see How to: Synchronize a Push Subscription (SQL Server Management Studio), How to: Synchronize a Pull Subscription (SQL Server Management Studio), and Replication Agent Executables Concepts.

Should I script my replication configuration?

Yes. Scripting the replication configuration is a key part of any disaster recovery plan for a replication topology. For more information on scripting, see Scripting Replication.

What recovery model is required on a replicated database?

Replication functions properly using any of the recovery models: simple, bulk-logged, or full. Merge replication tracks changes by storing information in metadata tables. Transactional replication tracks changes by marking the transaction log, but this marking process is not affected by the recovery model.

Why does replication add a column to replicated tables; will it be removed if the table isn't published?

To track changes, merge replication and transactional replication with queued updating subscriptions must be able to uniquely identify every row in every published table. To accomplish this:

• Merge replication adds the column rowguid to every table, unless the table already has a column of data type uniqueidentifier with the ROWGUIDCOL property set (in which case this column is used). If the table is dropped from the publication, the rowguid column is removed; if an existing column was used for tracking, the column is not removed.

• If a transactional publication supports queued updating subscriptions, replication adds the column msrepl_tran_version to every table. If the table is dropped from the publication, the msrepl_tran_version column is not removed.

How do I manage constraints on published tables?

There are a number of issues to consider regarding constraints on published tables:

• Transactional replication requires a primary key constraint on each published table. Merge replication does not require a primary key, but if one is present, it must be replicated. Snapshot replication does not require a primary key.

• By default, primary key constraints, indexes, and check constraints are replicated to Subscribers.

• The NOT FOR REPLICATION option is specified by default for foreign key constraints and check constraints; the constraints are enforced for user operations but not agent operations. For more information, see Controlling Constraints, Identities, and Triggers with NOT FOR REPLICATION.

For information on setting the schema options that control whether constraints are replicated, see How to: Specify Schema Options (SQL Server Management Studio) and How to: Specify Schema Options (Replication Transact-SQL Programming).

How do I manage identity columns?

Replication provides automatic identity range management for replication topologies that include updates at the Subscriber. For more information, see Replicating Identity Columns.

Can the same objects be published in different publications?

Yes, but with some restrictions. For more information, see the section "Publishing Tables in More Than One Publication" in the topic Publishing Data and Database Objects.

Can multiple publications use the same distribution database?

Yes. There are no restrictions on the number or types of publications that can use the same distribution database. All publications from a given Publisher must use the same Distributor and distribution database.

If you have multiple publications, you can configure multiple distribution databases at the Distributor to ensure that the data flowing through each distribution database is from a single publication. Use the Distributor Properties dialog box or sp_adddistributiondb (Transact-SQL) to add a distribution database. For more information about accessing the dialog box, see How to: View and Modify Distributor Properties (SQL Server Management Studio).

How do I find information on the Distributor and Publisher, such as which objects in a database are published?

This information is available through SQL Server Management Studio, and a number of replication stored procedures. For information, see Replication Properties and Distributor and Publisher Information Script.

Does replication encrypt data?

No. Replication does not encrypt data that is stored in the database or transferred over the network. For more information, see the "Encryption" section of the topic Security Overview (Replication).

How do I replicate data over the Internet?

Replicate data over the Internet using:

• A Virtual Private Network (VPN). For more information, see Publishing Data over the Internet Using VPN.

• The Web synchronization option for merge replication. For more information, see Web Synchronization for Merge Replication.

All types of Microsoft SQL Server replication can replicate data over a VPN, but you should consider Web synchronization if you are using merge replication.

Does replication resume if a connection is dropped?

Yes. Replication processing resumes at the point at which it left off if a connection is dropped. If you are using merge replication over an unreliable network, consider using logical records, which ensures related changes are processed as a unit. For more information, see Grouping Changes to Related Rows with Logical Records.

Does replication work over low bandwidth connections? Does it use compression?

Yes, replication does work over low bandwidth connections. For connections over TCP/IP, it uses the compression provided by the protocol but does not provide additional compression. For Web synchronization connections over HTTPS, it uses the compression provided by the protocol and also additional compression of the XML files used to replicate changes. For more information about replicating over low bandwidth connections, see A Slow Network Is Causing Problems.

Logins and Object Ownership



Are logins and passwords replicated?

No. You could create a DTS package to transfer logins and passwords from a Publisher to one or more Subscribers. For more information, see Designing and Implementing Packages (Integration Services).

What are schemas and how are they replicated?

Beginning with Microsoft SQL Server 2005, schema has two meanings:

• The definition of an object, such as a CREATE TABLE statement. By default, replication copies the definitions of all replicated objects to the Subscriber.

• The namespace within which an object is created: server.database.schema.object. Schemas are defined using the CREATE SCHEMA statement. For more information on schemas, see Schemas (Database Engine).

Replication has the following default behavior in the New Publication Wizard with respect to schemas and object ownership:

• For articles in merge publications with a compatibility level of 90 or higher, snapshot publications, and transactional publications: by default, the object owner at the Subscriber is the same as the owner of the corresponding object at the Publisher. If the schemas that own objects do not exist at the Subscriber, they are created automatically.

• For articles in merge publications with a compatibility level lower than 90: by default, the owner is left blank and is specified as dbo during the creation of the object on the Subscriber.

• For articles in Oracle publications: by default, the owner is specified as dbo.

• For articles in publications that use character mode snapshots (which are used for non-SQL Server Subscribers and SQL Server Compact 3.5 SP1 Subscribers): by default, the owner is left blank. The owner defaults to the owner associated with the account used by the Distribution Agent or Merge Agent to connect to the Subscriber.

The object owner can be changed through the Article Properties dialog box and through the following stored procedures: sp_addarticle, sp_addmergearticle, sp_changearticle, and sp_changemergearticle. For more information, see How to: View and Modify Publication and Article Properties (SQL Server Management Studio), How to: Define an Article (Replication Transact-SQL Programming), and How to: View and Modify Article Properties (Replication Transact-SQL Programming).

How can grants on the subscription database be configured to match grants on the publication database?

By default, replication does not execute GRANT statements on the subscription database. If you want the permissions on the subscription database to match those on the publication database, use one of the following methods:

• Execute GRANT statements at the subscription database directly.

• Use a post-snapshot script to execute the statements. For more information, see Executing Scripts Before and After the Snapshot Is Applied.

• Use the stored procedure sp_addscriptexec to execute the statements.

What happens to permissions granted in a subscription database if a subscription is reinitialized?

By default, objects at the Subscriber are dropped and recreated when a subscription is reinitialized, which causes all granted permissions for those objects to be dropped. There are two ways to handle this:

• Reapply the grants after the reinitialization using the techniques described in the previous section.

• Specify that objects should not be dropped when the subscription is reinitialized. Prior to reinitialization, either:

o Execute sp_changearticle or sp_changemergearticle. Specify a value of 'pre_creation_cmd' (sp_changearticle) or 'pre_creation_command' (sp_changemergearticle) for the parameter @property, and a value of 'none', 'delete' or 'truncate' for the parameter @value.

o In the Article Properties dialog box, in the Destination Object section, select a value of Keep existing object unchanged, Delete data. If article has a row filter, delete only data that matches the filter., or Truncate all data in the existing object for the option Action if name is in use. For more information on accessing this dialog box, see How to: View and Modify Publication and Article Properties (SQL Server Management Studio).

Database Maintenance

________________________________________
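As a rough sketch of the sp_addscriptexec approach (the publication name and script path here are hypothetical), run at the Publisher against the publication database:

-- Hypothetical publication name and UNC script path.
EXEC sp_addscriptexec
 @publication = N'MyPublication',
 @scriptfile = N'\\MyServer\scripts\grants.sql';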
Why can't I run TRUNCATE TABLE on a published table?

TRUNCATE TABLE is a minimally logged operation that does not fire triggers. It is not permitted because replication cannot track the changes caused by the operation: transactional replication tracks changes through the transaction log; merge replication tracks changes through triggers on published tables.

What is the effect of running a bulk insert command on a replicated database?

For transactional replication, bulk inserts are tracked and replicated like other inserts. For merge replication, you must ensure that change tracking metadata is updated properly. For more information, see the section "Bulk Inserting Data into Published Tables" in Considerations for Merge Replication.

Are there any replication considerations for backup and restore?

Yes. There are a number of special considerations for databases that are involved in replication. For more information, see Backing Up and Restoring Replicated Databases.

Does replication affect the size of the transaction log?

Merge replication and snapshot replication do not affect transaction log size, but transactional replication can. If a database includes one or more transactional publications, the log is not truncated until all transactions relevant to the publications have been delivered to the distribution database. If the transaction log is growing too large, and the Log Reader Agent is running on a scheduled basis, consider shortening the interval between runs. Or, set it to run in continuous mode. If it is set to run in continuous mode (the default), ensure that it is running. For more information on checking Log Reader Agent status, see How to: View Information and Perform Tasks for the Agents Associated With a Publication (Replication Monitor).

Additionally, if you have set the option 'sync with backup' on the publication database or distribution database, the transaction log is not truncated until all transactions have been backed up. If the transaction log is growing too large, and you have this option set, consider shortening the interval between transaction log backups. For more information on backing up and restoring databases involved in transactional replication, see Strategies for Backing Up and Restoring Snapshot and Transactional Replication.

How do I rebuild indexes or tables in replicated databases?

There are a variety of mechanisms for rebuilding indexes. They can all be used with no special considerations for replication, with the following exception: primary keys are required on tables in transactional publications, so you cannot drop and recreate primary keys on these tables.

How do I add or change indexes on publication and subscription databases?

Indexes can be added at the Publisher or Subscribers with no special considerations for replication (be aware that indexes can affect performance). CREATE INDEX and ALTER INDEX are not replicated, so if you add or change an index at, for example, the Publisher, you must make the same addition or change at the Subscriber if you want it reflected there.

How do I move or rename files for databases involved in replication?

In versions of SQL Server prior to SQL Server 2005, moving or renaming database files required detaching and reattaching the database. Because a replicated database cannot be detached, replication had to be removed from these databases first. Beginning with SQL Server 2005, you can move or rename files without detaching and re-attaching the database, with no effect on replication. For more information about moving and renaming files, see ALTER DATABASE (Transact-SQL).

How do I drop a table that is being replicated?

First drop the article from the publication using sp_droparticle, sp_dropmergearticle, or the Publication Properties dialog box, and then drop it from the database.