Thursday, July 21, 2011

Index Rebuilding vs Index Reorganizing in SQL Server

In SQL Server 2005, one can choose either to rebuild an index or to reorganize it. There are differences in what these commands do, and it is important to understand them. At a very simple level, here is how you would reorganize or rebuild an index:
CREATE TABLE DECIPHER_DATA (COL1 INT, COL2 NVARCHAR(10));
CREATE UNIQUE INDEX DECIPHER_DATA_IND_1 ON DECIPHER_DATA (COL2);
ALTER INDEX DECIPHER_DATA_IND_1 ON DECIPHER_DATA REORGANIZE;
ALTER INDEX DECIPHER_DATA_IND_1 ON DECIPHER_DATA REBUILD;
Here are the differences between the two:
1) An index rebuild works by internally re-creating the index and, once that has been achieved, dropping the existing index, whereas an index reorganize is the process of physically re-organizing the leaf-level pages of the index in place.
2) During the index rebuild process, the statistics are also re-computed, same as when a new index gets created. Reorganize, on the other hand, does not update the statistics. Reorganize essentially just swaps one page with another and thus, unlike rebuild, does not require free space for the operation. In fact, a reorganize can free up some pages, since it does its work in two phases: compaction and defragmentation. A reorganize can remove almost all of the logical fragmentation, but it cannot necessarily fix extent fragmentation, in which the previous and the next extents are not physically contiguous.
3) Another point to note is that an index (clustered or non-clustered) cannot be rebuilt online if it contains LOB data (text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data type columns). The ALTER INDEX…REORGANIZE command shown above is the same as DBCC INDEXDEFRAG, with one difference: ALTER INDEX…REORGANIZE has some additional features, such as large-object compaction (LOB_COMPACTION), and it is an online operation.
4) Regarding partitioned indexes: if an index has multiple partitions, you cannot rebuild a single partition online, though you can reorganize a single index partition online. If you want to rebuild a partitioned index online, you need to rebuild the entire index, which means rebuilding all the partitions.
So, how frequently should one rebuild or reorganize? Like many answers in the IT field, it depends :-) It depends on the fill factor, on the amount of data changed between rebuild/reorganize operations, and on what logical fragmentation value you consider the threshold for forcing these operations.
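As a starting point, here is a minimal sketch (assuming SQL Server 2005 or later) that interrogates sys.dm_db_index_physical_stats and applies the widely quoted guideline of reorganizing between roughly 5% and 30% logical fragmentation and rebuilding above 30%; treat the thresholds as guidelines, not hard rules:
-- Check logical fragmentation for all indexes in the current database
-- and suggest an action; the 5%/30% cut-offs are guidelines only
SELECT   OBJECT_NAME(ips.object_id) AS table_name ,
         i.name AS index_name ,
         ips.avg_fragmentation_in_percent ,
         CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
              WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
              ELSE 'NONE'
         END AS suggested_action
FROM     sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN     sys.indexes AS i ON i.object_id = ips.object_id
                         AND i.index_id = ips.index_id
WHERE    ips.index_id > 0; -- ignore heaps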
An additional question raised by one of our colleagues was whether the statistics on non-indexed columns also get re-computed when a rebuild is done. He was talking about the auto-created statistics (the ones you will have seen with names like _WA_Sys_xxxx) or the ones explicitly created using the CREATE STATISTICS command. If we are rebuilding an index, does it make sense to also rebuild those at the same time, especially if there is any correlation between them? Does that happen automatically upon a rebuild? The answer is no. It cannot happen automatically, since the correlation is not stored anywhere and those statistics are stored separately from the indexes. In SQL Server 2005 and later, there is a DATE_CORRELATION_OPTIMIZATION database SET option that can help improve the performance of queries in which two tables are in an inner join condition and whose date/datetime columns are correlated; for example, PO_HDR might have ORDER_DATE and PO_DTL might have PACK_DATE, SHIP_DATE, DUE_DATE, etc. I will check whether a rebuild in that case forces a rebuild of the correlated statistics as well and, if no index exists, whether the statistics on those correlated columns are re-computed when this option is on. I will post the results here once the tests are done.
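A minimal sketch of enabling and verifying that option (the SalesDB database name is hypothetical):
-- Enable correlation statistics between datetime columns of joined tables
ALTER DATABASE SalesDB SET DATE_CORRELATION_OPTIMIZATION ON;
-- Verify the setting
SELECT name, is_date_correlation_on FROM sys.databases WHERE name = 'SalesDB';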

Types of Locks

 

 Shared Locks

Shared (S) locks allow concurrent transactions to read (SELECT) a resource under pessimistic concurrency control. For more information, see Types of Concurrency Control. No other transactions can modify the data while shared (S) locks exist on the resource. Shared (S) locks on a resource are released as soon as the read operation completes, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction.
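A minimal illustration, reusing the DECIPHER_DATA table created in the earlier post: the HOLDLOCK hint (equivalent to SERIALIZABLE for the statement) retains the shared locks until the transaction ends:
BEGIN TRANSACTION;
-- Without the hint, the S lock would be released as soon as the read completes
SELECT COL2 FROM DECIPHER_DATA WITH (HOLDLOCK) WHERE COL1 = 1;
-- The shared locks are still held here; a concurrent UPDATE of the row would block
COMMIT TRANSACTION;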

Update Locks

Update (U) locks prevent a common form of deadlock. In a repeatable read or serializable transaction, the transaction reads data, acquiring a shared (S) lock on the resource (page or row), and then modifies the data, which requires lock conversion to an exclusive (X) lock. If two transactions acquire shared-mode locks on a resource and then attempt to update data concurrently, one transaction attempts the lock conversion to an exclusive (X) lock. The shared-mode-to-exclusive lock conversion must wait because the exclusive lock for one transaction is not compatible with the shared-mode lock of the other transaction; a lock wait occurs. The second transaction attempts to acquire an exclusive (X) lock for its update. Because both transactions are converting to exclusive (X) locks, and they are each waiting for the other transaction to release its shared-mode lock, a deadlock occurs.
To avoid this potential deadlock problem, update (U) locks are used. Only one transaction can obtain an update (U) lock to a resource at a time. If a transaction modifies a resource, the update (U) lock is converted to an exclusive (X) lock.
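A sketch of the standard fix, again using the DECIPHER_DATA table: acquiring the update (U) lock up front on the read means a second transaction waits at the SELECT, rather than both acquiring S locks and deadlocking during the conversion:
BEGIN TRANSACTION;
-- Take a U lock instead of an S lock on the row we intend to modify
SELECT COL2 FROM DECIPHER_DATA WITH (UPDLOCK) WHERE COL1 = 1;
-- The U lock converts to an X lock when the modification happens
UPDATE DECIPHER_DATA SET COL2 = N'changed' WHERE COL1 = 1;
COMMIT TRANSACTION;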

Exclusive Locks

Exclusive (X) locks prevent access to a resource by concurrent transactions. With an exclusive (X) lock, no other transactions can modify data; read operations can take place only with the use of the NOLOCK hint or read uncommitted isolation level.
Data modification statements, such as INSERT, UPDATE, and DELETE combine both modification and read operations. The statement first performs read operations to acquire data before performing the required modification operations. Data modification statements, therefore, typically request both shared locks and exclusive locks. For example, an UPDATE statement might modify rows in one table based on a join with another table. In this case, the UPDATE statement requests shared locks on the rows read in the join table in addition to requesting exclusive locks on the updated rows.

Intent Locks

The Database Engine uses intent locks to protect placing a shared (S) lock or exclusive (X) lock on a resource lower in the lock hierarchy. Intent locks are named intent locks because they are acquired before a lock at the lower level, and therefore signal intent to place locks at a lower level.
Intent locks serve two purposes:
  • To prevent other transactions from modifying the higher-level resource in a way that would invalidate the lock at the lower level.
  • To improve the efficiency of the Database Engine in detecting lock conflicts at the higher level of granularity.
For example, a shared intent lock is requested at the table level before shared (S) locks are requested on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because the Database Engine examines intent locks only at the table level to determine if a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine if a transaction can lock the entire table.
Intent locks include intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).
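You can observe the intent-lock hierarchy for yourself with the sys.dm_tran_locks DMV (SQL Server 2005 and later); a minimal sketch, reusing the DECIPHER_DATA table:
BEGIN TRANSACTION;
UPDATE DECIPHER_DATA SET COL2 = N'X' WHERE COL1 = 1;
-- Expect an X lock on the row, IX on its page and IX on the table (OBJECT)
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
ROLLBACK TRANSACTION;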

Intent shared (IS)

Protects requested or acquired shared locks on some (but not all) resources lower in the hierarchy.

Intent exclusive (IX)

Protects requested or acquired exclusive locks on some (but not all) resources lower in the hierarchy. IX is a superset of IS, and it also protects requesting shared locks on lower level resources.

Shared with intent exclusive (SIX)

Protects requested or acquired shared locks on all resources lower in the hierarchy and intent exclusive locks on some (but not all) of the lower level resources. Concurrent IS locks at the top-level resource are allowed. For example, acquiring a SIX lock on a table also acquires intent exclusive locks on the pages being modified and exclusive locks on the modified rows. There can be only one SIX lock per resource at one time, preventing updates to the resource made by other transactions, although other transactions can read resources lower in the hierarchy by obtaining IS locks at the table level.

Intent update (IU)

Protects requested or acquired update locks on all resources lower in the hierarchy. IU locks are used only on page resources. IU locks are converted to IX locks if an update operation takes place.

Shared intent update (SIU)

A combination of S and IU locks, as a result of acquiring these locks separately and simultaneously holding both locks. For example, a transaction executes a query with the PAGLOCK hint and then executes an update operation. The query with the PAGLOCK hint acquires the S lock, and the update operation acquires the IU lock.

Update intent exclusive (UIX)

A combination of U and IX locks, as a result of acquiring these locks separately and simultaneously holding both locks.

Schema Locks

The Database Engine uses schema modification (Sch-M) locks during a table data definition language (DDL) operation, such as adding a column or dropping a table. During the time that it is held, the Sch-M lock prevents concurrent access to the table. This means the Sch-M lock blocks all outside operations until the lock is released.
Some data manipulation language (DML) operations, such as table truncation, use Sch-M locks to prevent access to affected tables by concurrent operations.
The Database Engine uses schema stability (Sch-S) locks when compiling and executing queries. Sch-S locks do not block any transactional locks, including exclusive (X) locks. Therefore, other transactions, including those with X locks on a table, continue to run while a query is being compiled. However, concurrent DDL operations, and concurrent DML operations that acquire Sch-M locks, cannot be performed on the table.

Bulk Update Locks

The Database Engine uses bulk update (BU) locks when bulk copying data into a table, and either the TABLOCK hint is specified or the table lock on bulk load table option is set using sp_tableoption. Bulk update (BU) locks allow multiple threads to bulk load data concurrently into the same table while preventing other processes that are not bulk loading data from accessing the table.
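A hedged sketch of a bulk load that qualifies for the BU locking scheme; the staging table and file path are hypothetical, and for parallel bulk loading the target should be a heap:
-- TABLOCK requests the table-level lock needed for BU locks
BULK INSERT dbo.StagingData
FROM 'C:\loads\staging_data.txt'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');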

Key-Range Locks

Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level. Key-range locking prevents phantom reads. By protecting the ranges of keys between rows, it also prevents phantom insertions or deletions into a record set accessed by a transaction.
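A minimal demonstration using the DECIPHER_DATA table: under SERIALIZABLE, the range scan takes key-range locks on the DECIPHER_DATA_IND_1 index, so a concurrent insert into the scanned range will block:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COL2 FROM DECIPHER_DATA WHERE COL2 BETWEEN N'A' AND N'M';
-- A second session trying to INSERT a row with COL2 = N'F' now waits,
-- preventing phantoms, until this transaction commits or rolls back
COMMIT TRANSACTION;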

Questions and Answers

Question: What happens if a query tries to read data from a row that is currently locked by an exclusive (X) lock? Answer: Under the default READ COMMITTED isolation level, the SELECT statement is blocked until the exclusive lock is released. The locked row can be read while the X lock is held only by using the NOLOCK hint or the READ UNCOMMITTED isolation level (or a row versioning-based isolation level, in which the reader sees the last committed version of the row).

Useful Links

For more information on lock modes, see "Lock Modes", http://go.microsoft.com/fwlink/?LinkID=126237

What are Deadlocks?



A deadlock occurs when two transactions have locks on separate objects and each transaction requests a lock on the other transaction’s object. For example:
  • Transaction A holds a shared lock on row 1.
  • Transaction B holds a shared lock on row 2.
  • Transaction A requests an exclusive lock on row 2, but it cannot be granted until Transaction B releases the shared lock.
  • Transaction B requests an exclusive lock on row 1, but it cannot be granted until Transaction A releases the shared lock.
  • Each transaction must wait for the other to release the lock.
A deadlock can occur when several long-running transactions execute concurrently in the same database. A deadlock also can occur as a result of the order in which the optimizer processes a complex query, such as a join, in which you cannot necessarily control the order of processing.

How SQL Server Ends a Deadlock

SQL Server ends a deadlock by automatically terminating one of the transactions. It uses the following process:
  • Rolls back the transaction of the deadlock victim.
  • In a deadlock, SQL Server gives priority to the transaction that has been processing the longest; that transaction prevails. SQL Server rolls back the transaction with the least amount of time invested.
  • Notifies the deadlock victim’s application (with message number 1205).
  • Cancels the deadlock victim’s current request.
  • Allows the other transaction to continue.
Important: In a multiuser environment, each client should check regularly for message number 1205, which indicates that the transaction was rolled back. If message 1205 is found, the application should attempt the transaction again.
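A hedged sketch of such retry logic, written here in T-SQL for brevity (real applications usually implement the retry in client code; the body of the transaction is a placeholder):
DECLARE @retries INT;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the work that may be chosen as a deadlock victim ...
        COMMIT TRANSACTION;
        BREAK; -- success
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1; -- deadlock victim: try again
        ELSE
        BEGIN
            -- Not a deadlock: re-raise the error and stop retrying
            DECLARE @msg NVARCHAR(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
            BREAK;
        END
    END CATCH;
END;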

Guidelines for Minimizing Deadlocks

While it is not always possible to eliminate deadlocks, you can reduce the risk of a deadlock by observing the following guidelines:
  • Use resources in the same sequence in all transactions. For example, in all transactions that reference more than one table, reference the tables in the same order whenever possible.
  • Shorten transactions by minimizing the number of steps.
  • Shorten transaction times by avoiding queries that affect many rows.
  • Set the DEADLOCK_PRIORITY setting to LOW. With this setting in place, if the process encounters a deadlock, it will be chosen as the deadlock victim. This is useful for online analytical processing (OLAP) or reporting processes that you do not want to interfere with higher-priority OLTP processes (see the example after this list).
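For example, at the start of a low-priority reporting session:
-- This session volunteers to be the deadlock victim
SET DEADLOCK_PRIORITY LOW;
-- SQL Server 2005 and later also accept NORMAL, HIGH, or an integer from -10 to 10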

Lock Timeout

If a transaction becomes blocked while waiting for a resource and a deadlock results, SQL Server will terminate one of the participating transactions without waiting for a timeout.
If no deadlock occurs, SQL Server blocks the transaction requesting the lock until the other transaction releases the lock. By default, there is no mandatory timeout period that SQL Server observes. The only way to test whether a resource that you want to lock is already locked is to attempt to access the data, which could result in getting locked indefinitely.
The LOCK_TIMEOUT setting allows an application to set a maximum time that a statement waits on a blocked resource before the blocked statement is automatically cancelled. The cancellation does not roll back or cancel the transaction. The application must trap the error to handle the timeout situation and take remedial action, such as resubmitting the transaction or rolling it back.
The following example shows how to set the lock timeout. The timeout_period is the number of milliseconds that SQL Server will wait before returning a locking error.
SET LOCK_TIMEOUT timeout_period
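For example, to wait at most five seconds for a lock (when the timeout expires, error 1222, "Lock request time out period exceeded", is returned to the application):
SET LOCK_TIMEOUT 5000;      -- milliseconds
SELECT @@LOCK_TIMEOUT;      -- returns the current setting
-- A value of -1 (the default) means wait indefinitely; 0 means do not wait at all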

The KILL command terminates a user process based on the server process ID (spid).
KILL {spid | UOW} [WITH STATUSONLY]
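For example, using a hypothetical spid of 53:
KILL 53;                    -- terminate the session
KILL 53 WITH STATUSONLY;    -- report progress of any rollback in progress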

Questions and Answers

Question: Have you experienced deadlocking problems in your current environment? If so, how did you determine that deadlocks were a problem, and how was it resolved?

Useful Links

For more information on deadlocks, see "Deadlocking", http://go.microsoft.com/fwlink/?LinkID=126243

Transaction Logging

Transaction Logging


Every SQL Server 2008 database has a transaction log that records all transactions and the database modifications made by each transaction. The transaction log is a critical component of the database and, if there is a system failure, the transaction log might be required to bring your database back to a consistent state. The transaction log should never be deleted or moved unless you fully understand the ramifications of doing this.

Operations Supported by the Transaction Log

The transaction log supports the following operations:
  • Recovery of individual transactions.
    If an application issues a ROLLBACK statement, or if the Database Engine detects an error such as the loss of communication with a client, the log records are used to roll back the modifications made by an incomplete transaction.
  • Recovery of all incomplete transactions when SQL Server is started.
    If a server that is running SQL Server fails, the databases may be left in a state where some modifications were never written from the buffer cache to the data files, and there may be some modifications from incomplete transactions in the data files. When an instance of SQL Server is started, it runs a recovery of each database. Every modification recorded in the log which may not have been written to the data files is rolled forward. Every incomplete transaction found in the transaction log is then rolled back to make sure the integrity of the database is preserved.
  • Rolling a restored database, file, filegroup, or page forward to the point of failure.
    After a hardware loss or disk failure affecting the database files, you can restore the database to the point of failure. You first restore the last full database backup and the last differential database backup, and then restore the subsequent sequence of the transaction log backups to the point of failure. As you restore each log backup, the Database Engine reapplies all the modifications recorded in the log to roll forward all the transactions. When the last log backup is restored, the Database Engine then uses the log information to roll back all transactions that were not complete at that point.
  • Supporting transactional replication.
    The Log Reader Agent monitors the transaction log of each database configured for transactional replication and copies the transactions marked for replication from the transaction log into the distribution database.
  • Supporting standby-server solutions.
    The standby-server solutions, database mirroring, and log shipping, rely heavily on the transaction log. In a log shipping scenario, the primary server sends the active transaction log of the primary database to one or more destinations. Each secondary server restores the log to its local secondary database.
    In a database mirroring scenario, every update to a database, the principal database, is immediately reproduced in a separate, full copy of the database, the mirror database. The principal server instance sends each log record immediately to the mirror server instance which applies the incoming log records to the mirror database, continually rolling it forward.

Transaction Log Characteristics

Following are the characteristics of the SQL Server Database Engine transaction log:
  • The transaction log is implemented as a separate file or set of files in the database. The log cache is managed separately from the buffer cache for data pages, which results in simple, fast, and robust code within the Database Engine.
  • The format of log records and pages is not constrained to follow the format of data pages.
  • The transaction log can be implemented in several files. The files can be defined to expand automatically by setting the FILEGROWTH value for the log. This reduces the potential of running out of space in the transaction log, while at the same time reducing administrative overhead.
  • The mechanism to reuse the space within the log files is quick and has minimal effect on transaction throughput.

Questions and Answers

Question: How can you roll a restored database, file, filegroup, or page forward to the point of failure?
Answer: After a hardware loss or disk failure affecting the database files, you can restore the database to the point of failure. You first restore the last full database backup and the last differential database backup, and then restore the subsequent sequence of the transaction log backups to the point of failure. As you restore each log backup, the Database Engine reapplies all the modifications recorded in the log to roll forward all the transactions. When the last log backup is restored, the Database Engine then uses the log information to roll back all transactions that were not complete at that point.

Monday, July 18, 2011

Transaction Log Management in SQL Server: Managing the Log in Full Recovery Mode

The Series

This article is part of the Stairway Series: Stairway to Transaction Log Management in SQL Server
When things are going well, there is no need to be particularly conscious of what the transaction log does or how it works. You just need to be confident that every database has the correct backup regime in place. When things go wrong, an understanding of the transaction log is important for taking corrective action, particularly when a point-in-time restore of a database is required, urgently! Tony Davis gives just the right level of detail that every DBA should know.
In this Level we'll review why and how to take log backups when working in FULL recovery mode, and how to perform a database restore using these log backup files, in conjunction with a full database backup. FULL recovery mode supports database restore to any point in time within an available log backup and, assuming a tail log backup can be made, right up to the time of the last committed transaction, before the failure occurred.

What gets Logged?

In FULL recovery mode, all operations are fully logged. For INSERT, UPDATE and DELETE operations, this means that for every row that is modified, there will be a log record describing the ID of the transaction that performed the statement, when that transaction started and ended, which pages were changed, the data changes that were made, and so on.
Operations that can be minimally logged, such as SELECT INTO, BULK INSERT and CREATE INDEX, are still fully logged when working in FULL recovery mode, but the logging is done slightly differently. The rows affected by those operations are not logged individually; instead, only the database pages get logged, as they get filled. This reduces the logging overhead of such operations, while making sure that all the information needed to perform rollback, redo and point-in-time restores still exists. Kalen Delaney has published some investigations into logging for SELECT INTO (http://sqlblog.com/blogs/kalen_delaney/archive/2011/03/15/what-gets-logged-for-select-into.aspx) and index rebuild (http://sqlblog.com/blogs/kalen_delaney/archive/2011/03/08/what-gets-logged-for-index-rebuilds.aspx) operations, both in FULL and BULK_LOGGED recovery modes. The differences in logging of minimally-logged operations, when working in BULK_LOGGED mode, are discussed in more detail in Level 6, Managing the Log in BULK_LOGGED Recovery Mode.

Why Backup the Transaction Log?

In FULL recovery mode, only a log backup can cause truncation of the log. As such, the transaction log will hold a full and complete record of the transactions performed since the last time the transaction log was backed up. Since all operations are fully logged, the log file can grow very large, very quickly, in busy systems.
Therefore, when working in FULL recovery mode, it is vital that you perform regular transaction log backups, in addition to full backups and, optionally, differential backups. Many novice or part-time DBAs perform full backups on their databases, but they don't perform transaction log backups. As a result, the transaction log is not truncated, and it grows and grows until the drive it is on runs out of disk space, causing SQL Server to stop working.
Truncation of the log will occur as soon as the log backup is taken, assuming that a checkpoint has occurred since the previous backup and that no other factors are delaying truncation, such as a data backup or restore operation. For a full list of factors that may delay truncation of recoverable VLFs, as well as factors that keep large swathes of the log active that otherwise wouldn't need to be, such as a rogue, long-running uncommitted transaction or database mirroring or replication processes, see: http://msdn.microsoft.com/en-gb/library/ms345414.aspx.

COPY_ONLY backups of the transaction log

COPY_ONLY backups of the transaction log don't truncate the transaction log. A COPY_ONLY log backup exists "independently" of the normal log backup scheme; it does not break the log backup chain.
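A minimal sketch, following the placeholder naming used later in this Level:
-- Take a log backup without truncating the log or affecting the backup chain
BACKUP LOG DatabaseName
TO DISK = 'FileLocation\DatabaseName_copyonly.trn'
WITH COPY_ONLY;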
In short, transaction log backups perform the dual purpose of allowing restore and recovery to a previous point in time, as well as controlling the size of the transaction log. Probably the most common cause of transaction log-related issues is working in FULL recovery mode and simply not taking log backups, or not taking log backups frequently enough to control the size of the transaction log file.
If you are unsure whether or not transaction log backups are being taken on a given database, then you can simply interrogate the backupset table in the MSDB database, using a query similar to that shown in Listing 5.1.
USE msdb ;
SELECT   backup_set_id ,
         backup_start_date ,
         backup_finish_date ,
         backup_size ,
         recovery_model ,
         [type]
FROM     dbo.backupset
WHERE    database_name = 'TestDB';
Listing 5.1: Are log backups being taken?
In the type column, a D represents a database backup, L a log backup and I a differential backup.
Note that since the data in this backupset table could be manipulated without affecting backup and restore behavior, you might want to verify your findings from this query by querying sys.database_recovery_status to see the value of last_log_backup_lsn (see Listing 3.5), or sys.databases to see the value of log_reuse_wait_desc (which will return LOG_BACKUP if a log backup is required).
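For example, the following query shows, for every database, whether the log is waiting on a log backup before space can be reused:
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;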

How to Back up the Transaction Log

As discussed previously, it is not possible to perform a transaction log backup without first taking at least one full backup. In fact, if you have a database that is in FULL recovery mode, but has never been backed up, then it will not actually be working in FULL recovery mode. The database will be in auto-truncate mode until the first full backup is performed.
All database backups, full, log or otherwise, are performed using the BACKUP command. The command accepts numerous options, which are documented here: http://msdn.microsoft.com/en-us/library/ms186865.aspx. However, at its most basic, which is often how it's used, the command to perform a full backup to disk is as follows:
BACKUP DATABASE DatabaseName
TO DISK = 'FileLocation\DatabaseName.bak';
If this were the first backup to be performed, the DatabaseName.bak file would be created in the specified directory. If such a file already existed, then the default behavior is to append subsequent backups to that file. To override this behavior, and stipulate that any existing file should be overwritten, we can use the INIT option, as follows:
BACKUP DATABASE DatabaseName
TO DISK = 'FileLocation\DatabaseName.bak'
WITH INIT;
Most commonly, however, each subsequent backup is given a unique name; more on this in the forthcoming section, Full Restore to Point of Failure.
After each regular (e.g. daily) full backup, there will be frequent (e.g. hourly) log backups, the basic command for which is very similar:
BACKUP LOG DatabaseName
TO DISK = 'FileLocation\DatabaseName_Log.bak';

Storing Log Backups

Clearly the backed up data and log files should not be stored on the same drive that hosts the live files. If that drive suffers hardware failure then all your copies will be lost along with the live files, and the backups will have been in vain. The files should be backed up to a separate device, or backed up to a local, mirrored drive.

Frequency of Log Backups

As noted in previous Levels, you may be taking log backups every 15 minutes, or perhaps even more frequently. In such cases, in order to avoid the need to restore a huge number of transaction log files, you may choose to adopt a backup scheme consisting of full backups interspersed with differential backups and transaction log backups.
In reality, the backup scheme is often more of a compromise between the ideal and the practical, between an assessment of the true risk of data loss, and what it will cost the company, and the cost involved in mitigating that risk. Many very important business applications use somewhat simpler, but nevertheless rigorous, backup schemes, perhaps involving regularly nightly full backups coupled with hourly transaction log backups.
The frequency of log backups may also be dictated by the number of transactions to which the database is subject. For very busy databases, it may be necessary to backup frequently in order to control the size of the log.
There is no easy way to calculate how often to take log backups. Most DBAs will take their best estimate at how often log backups should be taken, then observe the growth characteristics of the files and then adjust the backup scheme as necessary to prevent them from getting oversized.
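One simple way to observe those growth characteristics is DBCC SQLPERF, which reports the current size of each database's transaction log and the percentage of space in use:
DBCC SQLPERF(LOGSPACE);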

The Log Chain and how to break it

As noted, it is not possible to perform a transaction log backup without first taking at least one full backup. In order to recover a database to a point in time, either to the end of a particular log backup or to a point in time within a particular log backup, there must exist a full unbroken chain of log records, from the first log backup taken after a full (or differential backup), right up to the point of failure. This is known as the log chain.
There are many ways to break the log chain, and if you do it means that you will only be able to recover the database to the time of the log backup taken before the event occurred that broke the chain. In short, breaking the chain is not a good idea if you care about the ability to restore your data. Two of the most common ways to break the chain include:
  • Loss or corruption of a transaction log backup file – you will only be able to recover to the last preceding good log backup. The log chain will start again at the next good full or differential backup.
  • Switch to SIMPLE recovery mode – if you ever switch from FULL to SIMPLE recovery mode, this will break the log chain as a checkpoint will be instigated and the transaction log can be immediately truncated. When and if you return to FULL mode, you will need to take another full backup to restart the log chain. In fact, until you take that full backup, the database will remain in auto-truncate mode and you won't be able to back up the log file.
Pre-SQL Server 2008, there were a couple of commands, namely BACKUP LOG WITH NO_LOG and BACKUP LOG WITH TRUNCATE_ONLY (they are functionally equivalent), that, when issued, would force a log file truncation and so break the log chain. You should not issue these commands in any version of SQL Server, but I mention them here as they do still get used by the unwary, when trying to deal with a "runaway log file", without an understanding of the implications for the ability to restore the database. See Level 8 – Help, my log is full, for more details.

Tail Log Backups

As long as you have a recent full backup and a complete log chain, you can recover your database to the state in which it existed at the end of the final log backup before any failure. However, suppose that you take transaction log backups hourly, on the hour, and a failure occurs at 1:45PM. You could potentially lose 45 minutes worth of data; and indeed, if the failure is so catastrophic that the live transaction log is irretrievable, then that is the amount of data you will lose.
However, sometimes the live transaction log can still be available even if the data files are not, especially if the transaction log is contained on a separate, dedicated drive. If this is the case, you should back up the live transaction log i.e. perform a final backup of the log records generated since the last log backup. This will capture the remaining log records in the live log file, up to the point of failure. This is termed a tail log backup and is the last action that should be performed before beginning the restore and recovery operations.

Tail log backups and minimally-logged operations

If the data files are unavailable as a result of the database failure, and the tail of the log contains minimally logged operations, then it will not be possible to do a tail log backup, as this would require access to the changed data extents in the data file. This will be covered in more detail in Level 6, Managing the Transaction Log in Bulk Logged Mode.
If the database you wish to restore is online, then the tail of the log is backed up as follows:
BACKUP LOG DatabaseName
TO DISK = 'FileLocation\DatabaseName_Log.bak'
WITH NORECOVERY;
The NORECOVERY option puts the database in a restoring state and assumes that the next action you wish to perform is a RESTORE. If the database is offline and won't start, you should still attempt to back up the tail of the log as just described (although the NORECOVERY option can be omitted, since no transactions will be in progress).
If you are sure that the log file is damaged, the documentation suggests that, as a last resort, you try to do a tail log backup with:
BACKUP LOG DatabaseName
TO DISK = 'FileLocation\DatabaseName_Log.bak'
WITH CONTINUE_AFTER_ERROR;
If the master database and data files are damaged, but the logs are available, Microsoft recommends rebuilding the master database and then backing up the last active log. However, these topics are outside the scope of this Stairway, and I refer you to the documentation for further details. See http://msdn.microsoft.com/en-us/library/ms190952.aspx.

Performing Restore and Recovery

Having performed a tail log backup, if possible, the next step is to restore the last full backup (followed by differential backup, if appropriate), then restore the complete sequence of log backup files, including the tail log backup. The basic syntax for this sequence of restore operations is as follows:
RESTORE {DATABASE | LOG} DatabaseName
FROM DISK = 'FileLocation\FileName.bak'
WITH NORECOVERY;
If when restoring you omit the WITH NORECOVERY option, then by default the RESTORE command will proceed WITH RECOVERY. In other words, SQL Server will attempt to reconcile the data and log files, rolling forward completed transactions and then rolling back uncompleted transactions as necessary. By specifying WITH NORECOVERY, we are instructing SQL Server that we are entering a restore sequence and that more operations must be rolled forward, before any rollback can be performed. After restoring the last backup in the restore sequence, the database can then be recovered as follows:
RESTORE DATABASE DatabaseName WITH RECOVERY
A common requirement is to restore the database to a different location, in which case you can simply move the files as part of the restore process, as described here: http://msdn.microsoft.com/en-us/library/ms190255.aspx.

Restoring after Database Failure

The following examples describe how to recover a database in response to a failure, whereby the database data files are no longer accessible.

Full Restore to Point of Failure

Assuming that the "live" transaction log can be reached after a database failure, caused perhaps by a hardware failure, then in theory it should be possible to restore and recover your database right up to the point of failure, by using the following steps:
  1. Back up the tail of the log
  2. Restore the most recent full backup (plus differential, if applicable)
  3. Restore, in turn, each of the transaction log backups that were taken after the full (or differential) backup and completed before the time of failure
  4. Restore the tail log backup
  5. Recover the database
Many of the examples found on Books Online demonstrate restore and recovery from a "backup set", in other words a single "device" where all backups are stored. In practical terms, this means that, when backing up to disk, the backup device is a single .bak file located somewhere on that disk.
So, for example, the simple example shown in Listing 5.2 uses a backup set consisting of one full backup and one transaction log backup, and shows how to perform a full restore. In order to run this code, you'll first need to recreate the TestDB database and then insert a few sample rows of data (for convenience, the script to do this, CreateAndPopulateTestDB.sql, is included with the code download for this Level). You'll also need to create a "Backups" directory on the local C: drive of your database server, or modify the file paths as appropriate.
-- Perform a full backup of the Test database
-- The WITH FORMAT option starts a new backup set
-- Be careful, as it will overwrite any existing sets
-- The full backup becomes the first file in the set
BACKUP DATABASE TestDB
TO DISK = 'C:\Backups\TestDB.bak'
WITH FORMAT;
GO
-- Perform a transaction log backup of the Test database
-- This is the second file in the set
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB.bak'
GO
-- ....<FAILURE OCCURS HERE>....
-- The RESTORE HEADERONLY command is optional.
-- It simply confirms the files that comprise
-- the current set
RESTORE HEADERONLY
FROM DISK = 'C:\Backups\TestDB.bak'
GO
-- Back up the tail of the log to prepare for restore
-- This will become the third file of the backup set
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB.bak'
WITH NORECOVERY;
GO
-- Restore the full backup
RESTORE DATABASE TestDB
FROM DISK = 'C:\Backups\TestDB.bak'
WITH FILE=1, NORECOVERY;
-- Apply the transaction log backup
RESTORE LOG TestDB
FROM DISK = 'C:\Backups\TestDB.bak'
WITH FILE=2, NORECOVERY;
-- Apply the tail log backup
RESTORE LOG TestDB
FROM DISK = 'C:\Backups\TestDB.bak'
WITH FILE=3, NORECOVERY;
-- Recover the database
RESTORE DATABASE TestDB
WITH RECOVERY;
GO
Listing 5.2: Backing up to, and restoring from, a backup set; not recommended
However, using backup sets seems to be a relic from the days when databases were backed up to tape. When backing up to disk, it is a bad idea to use this scheme because, of course, the backup file will quickly grow very large.
In practice, it is far more common that each full backup and transaction log backup file will be individually named, and probably stamped with the time and date that the backup was taken. For example, most third party backup tools, popular community-generated scripts, plus the maintenance plan wizard /designer in SSMS, will all create separate date-stamped files e.g. AdventureWorks_FULL_20080904_000001.bak.
As such, a more common backup and restore scheme would use uniquely-named backups, as shown in Listing 5.3.
USE master;
BACKUP DATABASE TestDB
TO DISK = 'C:\Backups\TestDB.bak'
WITH INIT;
GO
-- Perform a transaction log backup of the Test database
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB_log.bak'
WITH INIT;
GO
-- ....<FAILURE OCCURS HERE>....
-- Back up the tail of the log to prepare for restore
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB_taillog.bak'
WITH NORECOVERY, INIT;
GO
-- Restore the full backup
RESTORE DATABASE TestDB
FROM DISK = 'C:\Backups\TestDB.bak'
WITH NORECOVERY;
-- Apply the transaction log backup
RESTORE LOG TestDB
FROM DISK = 'C:\Backups\TestDB_log.bak'
WITH NORECOVERY;
-- Apply the tail log backup
RESTORE LOG TestDB
FROM DISK = 'C:\Backups\TestDB_taillog.bak'
WITH NORECOVERY;
-- Recover the database
RESTORE DATABASE TestDB
WITH RECOVERY;
GO
Listing 5.3: Backing up to, and restoring from, uniquely-named backup files

Point in time Restore to Last Good Log Backup

Sometimes, unfortunately, it may not be possible to perform a full restore; for example, if the live transaction log is unavailable as a result of the failure. In this case, we will need to restore just to the end of the most recent log backup. It is the need to prepare for this eventuality, i.e. a failure of the drive containing the transaction log, that dictates how often log backups are taken. If you take backups every 15 minutes, then you are exposed to the risk of up to 15 minutes of data loss.
Imagine that we had performed the sequence of backups shown in Listing 5.4. For the sake of this demo, we are overwriting previous backup files, and the backup sequence is obviously much shorter than it would be in reality.
-- FULL BACKUP at 2AM
USE master;
BACKUP DATABASE TestDB
TO DISK = 'C:\Backups\TestDB.bak'
WITH INIT;
GO
-- LOG BACKUP 1 at 2.15 AM
USE master;
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB_log.bak'
WITH INIT;
GO
-- LOG BACKUP 2 at 2.30 AM
USE master;
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB_log2.bak'
WITH INIT;
GO
Listing 5.4: A short series of log backups
If a catastrophic failure occurred shortly after 2:30 AM, we might need to restore the database to the state in which it existed at the end of log backup 2, at 2:30 AM.
The restore sequence in such an example is very similar to the one we saw earlier, in Listing 5.3, but since a tail log backup is not possible and we'll only be able to restore to a certain point, we need to use the STOPAT option, as shown in Listing 5.5.
--RESTORE Full backup
RESTORE DATABASE TestDB
FROM DISK = 'C:\Backups\TestDB.bak'
WITH NORECOVERY;
--RESTORE Log file 1
RESTORE LOG TestDB
FROM DISK = 'C:\Backups\TestDB_log.bak'
WITH NORECOVERY, STOPAT = 'Jan 01, 2020 12:00 AM';
--RESTORE Log file 2
RESTORE LOG TestDB
FROM DISK = 'C:\Backups\TestDB_Log2.bak'
WITH NORECOVERY, STOPAT = 'Jan 01, 2020 12:00 AM';
--Recover the database
RESTORE DATABASE TestDB
WITH RECOVERY;
GO
Listing 5.5: Restoring to a point in time, using STOPAT
Since we've specified a STOPAT time in the future, this code will roll forward all completed transactions up to the end of the second transaction log.
Alternatively, it's possible to specify a STOPAT time that falls within the time range of the transactions recorded in a specific log file. In this case, the database will be restored up to the last committed transaction at the time specified. This is useful when you know what time you want to restore to, but don't know exactly what log backup contains that time.
It is also possible to restore to a particular marked transaction. This is useful when, for example, you need to restore multiple databases, accessed by a certain application, to a logically consistent point. This topic is not discussed further here, but you can find out more on Books Online (http://msdn.microsoft.com/en-us/library/ms187014.aspx), and Mladen Prajdic provides a good worked example here: http://weblogs.sqlteam.com/mladenp/archive/2010/10/20/sql-server-transaction-marks-restoring-multiple-databases-to-a-common.aspx.
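As a hedged sketch of the idea (the transaction name and the dbo.Prices table are hypothetical): the transaction is marked at the time of the change, and the logs can later be restored up to that mark:
BEGIN TRANSACTION UpdatePrices WITH MARK 'Price update batch';
UPDATE dbo.Prices SET Amount = Amount * 1.1;
COMMIT TRANSACTION;
-- During a restore sequence, stop at (and include) the marked transaction:
-- RESTORE LOG DatabaseName FROM DISK = 'FileLocation\DatabaseName_Log.bak'
-- WITH STOPATMARK = 'UpdatePrices', NORECOVERY;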

Restoring after a "Bad Transaction"

Outside of the context of any database failure, it may be necessary to restore a database backup, plus transaction logs, in order to return a database to a particular point in time just before an erroneous data modification was made, such as dropping or truncating a table.
Your response to such a situation would depend on the nature of the problem. If possible, you might disconnect all users from the database (after notifying them), and assess the implications of what just happened. In some cases, you might need to estimate the time the problem occurred and then do a full recovery of the database and logs using a point in time restore. Once the restore was done, you'd have to notify users that some transactions may have been lost, and ask for forgiveness.
Of course, often you will not be able to interrupt normal business operation in this manner, to fix an accidental data loss. Since the live database is still up and running and being accessed, you could try restoring a backup of the database in STANDBY mode. This allows further log backups to be restored but unlike when using NORECOVERY, the database is still readable. The restore scheme might look something like this:
  1. Restore a backup of the database, in STANDBY mode, alongside the live database
  2. Roll the logs forward to the point just before the bad transaction occurred, and data was lost.
  3. Copy the lost data across to the live database and drop the restored copy
Of course, this process is not necessarily straightforward, and can be quite time-consuming. Unless you've purchased a specialized log reading tool, and can interrogate the log backup directly, rolling the logs forward can mean a series of painstaking steps involving restoring a log, checking the data, restoring a bit further, and so on, until you've worked out exactly where the bad transaction occurred. Step 3 can be difficult too, since you will be introducing data into the live system that is not necessarily consistent with the current state of the database, so there could be referential integrity issues.
Let's take a look at an example that implements steps 1 and 2 above. First, let's start again from scratch by running the CreateAndPopulateTestDB.sql script to recreate the TestDB database, and insert 10 rows of test data into a new LogTest table. In Listing 5.6, we simply do a full database backup (overwriting any previous backup file). You'll need to create the "Backups" directory, if you've not done so already, or adjust the path as appropriate.
-- full backup of the database
BACKUP DATABASE TestDB
TO DISK = 'C:\Backups\TestDB.bak'
WITH INIT;
GO
Listing 5.6: Full backup of TestDB
We then insert one new row of data into the LogTest table.
USE TestDB
GO
INSERT INTO [TestDB].[dbo].[LogTest]
           ([SomeInt]
           ,[SomeLetters2])
     VALUES
           (66666,
           'ST')
SELECT * FROM dbo.LogTest
Listing 5.7: Inserting an 11th row into TestDB
So now we have a live TestDB database with 11 rows in the LogTest table, and a backed-up version with 10 rows. Let's now capture this additional modification in a log backup, as shown in Listing 5.8.
USE master
GO
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB_log.bak'
WITH INIT;
GO
Listing 5.8: A log backup of TestDB
Now, we're going to simulate an erroneous "bad transaction", simply by dropping the LogTest table, after which we do a final log backup.
USE TestDB
GO
DROP TABLE dbo.LogTest;
USE master
GO
BACKUP LOG TestDB
TO DISK = 'C:\Backups\TestDB_log2.bak'
WITH INIT;
GO
Listing 5.9: Disaster!
In order to try to retrieve the lost data, without interrupting normal business operation, we're going to restore a copy of the TestDB database in STANDBY mode. The data and log files for the Standby database, called ANewTestDB, are moved to a "Standby" directory (you'll need to create this directory beforehand).
-- restore a copy of the TestDB database, called
-- ANewTestDB, in STANDBY mode
USE master;
GO
RESTORE DATABASE ANewTestDB
   FROM DISK = 'C:\Backups\TestDB.bak'
   WITH STANDBY = 'C:\Backups\ANEWTestDB.bak',
   MOVE 'TestDB_dat' TO 'C:\Standby\ANewTestDB.mdf',
   MOVE 'TestDB_log' TO 'C:\Standby\ANewTestDB.ldf'
GO
Listing 5.10: Restore a copy of TestDB in STANDBY mode
We now have a new database, called ANewTestDB, and it's in "Standby / Read-Only" mode, as shown in Figure 5.1.

Figure 5.1: The Standby database
A query against the LogTest table in the ANewTestDB database will reveal 10 rows. However, we'd like to get the table back into the state it was in just prior to being erroneously dropped. Therefore, the next step is to restore a log backup to the standby database.
USE master
GO
RESTORE LOG ANewTestDB
FROM DISK = 'C:\Backups\TestDB_log.bak'
   WITH STANDBY = 'C:\Backups\ANewTestDB_log.bak'
Listing 5.11: Restore a log backup to the ANewTestDB database, in STANDBY mode
At this point, a query against ANewTestDB reveals 11 rows, and we are now ready to copy that data back across into the live database. If we went a step further and restored the second log backup, we'd find we'd gone too far: the table would be missing in the standby database as well.
An alternative to doing a Standby restore is to consider use of a third party tool such as Red Gate's SQL Virtual Restore, which provides a way to mount backups as live, fully functional databases, without a physical restore.
Whether DBAs like it or not, developers often have access to production databases to perform ad hoc data loads and changes. It is the joint responsibility of the DBA and the developer to make sure these proceed smoothly and do not cause issues that require the sort of action just described. We return to this topic later, in Level 6 – Dealing with Bulk Operations.
Of course, the exact nature of the reparative action required depends on the nature of the bad transaction. If a table was "accidentally dropped" then it's likely you'll be heading down the RESTORE WITH STANDBY route. At other times, you may get away with simply creating a script to "reverse out" the rogue modifications.
If the damage only affected a single column or a limited number of rows, then it may be possible, as an alternative, to use a tool such as SQL Data Compare, which can compare directly to backup files, and can do row-level restores.
Alternatively, if you run SQL Server 2005 (or later) Enterprise Edition, and have available a recent database snapshot, you may be able to run a query against the snapshot to retrieve the data as it looked at the time the database snapshot was taken, and then write an UPDATE or INSERT command to pull the data from the database snapshot into the live, source database.
Finally, as a last resort, a specialized log reader tool may help you reverse out the effects of a transaction although I'm not aware of any that work reliably in SQL Server 2005 and later.

Summary

In this Level, we've covered the basics of backing up and restoring log files for databases operating in FULL recovery mode, which will be the norm for many production databases.
For most DBAs, the need to perform a point-in-time restore is a rare event, but it's one of those tasks where, if it is necessary, it is absolutely critical that it is done and done well; the DBA's reputation depends on it.
In the case of corruption, drive failure, and so on, point-in-time recovery might involve, if you're lucky, backing up the tail of the transaction log and restoring right to point of failure. If the transaction log is not available, or if you're restoring in order to revert to some point in time before a "bad transaction" occurred, then the situation becomes trickier, but hopefully some of the techniques covered in this step will help.

Saturday, July 16, 2011

Moving System Databases - SQL Server 2005

 

There are lots of articles, links and sites that describe how to move the system databases from the SQL Server default location to a location chosen by the DBA, but many users still find the process difficult and end up in trouble. So I thought of writing an article on moving these databases, so that users can follow it and complete the task without problems.

Tasks for moving system databases:

1.) Moving the tempdb database.

a.) Execute the script below.

USE master;
GO
alter database tempdb MODIFY FILE (NAME = tempdev,FILENAME='NEW PATH');
GO
alter database tempdb MODIFY FILE (NAME = templog,FILENAME='NEW PATH');
GO

b.) Restart services.
c.) Confirm path of database files.
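To confirm the new file locations, one option (SQL Server 2005 and later) is to query sys.master_files; the same query can be reused for the other system databases in the steps that follow:

SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');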

2.) Moving the model and msdb databases.

a.) Execute the script below.

USE master;
GO
alter database msdb MODIFY FILE (NAME = MSDBData,FILENAME='NEW PATH');
go
alter database msdb MODIFY FILE (NAME = MSDBLog,FILENAME='NEW PATH');
go

USE master;
GO
alter database model MODIFY FILE (NAME = modeldev,FILENAME='NEW PATH');
go
alter database model MODIFY FILE (NAME = modellog,FILENAME='NEW PATH');
go

b.) Stop services
c.) Copy the files to the new location
d.) Restart services.
e.) Confirm path of database files.

3.) Moving the master database:

a.) Edit the startup parameters to reflect the new path for the -d, -l and -e parameters.
b.) Stop the services.
c.) Move the master and resource database files to the new location
d.) Change the SQL Server port to a different one than usual, to avoid excess connections, and create an alias with that port.
e.) Start the services using NET START MSSQLSERVER /f /T3608 (MSSQLSERVER is for the default instance; if you have installed a named instance, use NET START MSSQL$InstanceName /f /T3608).
f.) Execute the script given below from sqlcmd

USE master;
GO
alter database mssqlsystemresource MODIFY FILE (NAME = data,FILENAME='NEW PATH\mssqlsystemresource.mdf');
go
alter database mssqlsystemresource MODIFY FILE (NAME = log,FILENAME='NEW PATH\mssqlsystemresource.ldf');
go
alter database mssqlsystemresource set READ_ONLY;
go

g.) Stop the services
h.) Change the port back to 1433
i.) Remove the alias
j.) Start sql services.
k.) Confirm if the data files and log files reside on desired path.

SQL Server Service Startup Options

When you install SQL Server, Setup writes a set of default startup options in the Microsoft Windows registry. You can use these startup options to specify an alternate master database file, master database log file, or error log file.
Startup options can be set by using SQL Server Configuration Manager. For information, see How to: Configure Server Startup Options (SQL Server Configuration Manager).
Default startup options:
-d master_file_path: The fully qualified path for the master database file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\Data\master.mdf). If you do not provide this option, the existing registry parameters are used.
-e error_log_path: The fully qualified path for the error log file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG). If you do not provide this option, the existing registry parameters are used.
-l master_log_path: The fully qualified path for the master database log file (typically C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\Data\mastlog.ldf). If you do not specify this option, the existing registry parameters are used.
You can override the default startup options temporarily and start an instance of SQL Server by using the following additional startup options.
Other startup options:
-c: Shortens startup time when starting SQL Server from the command prompt. Typically, the SQL Server Database Engine starts as a service by calling the Service Control Manager. Because the SQL Server Database Engine does not start as a service when starting from the command prompt, use -c to skip this step.
-f: Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value (for example, over-committing memory) has prevented the server from starting. Starting SQL Server in minimal configuration mode places SQL Server in single-user mode. For more information, see the description for -m that follows.
-g memory_to_reserve: Specifies an integer number of megabytes (MB) of memory that SQL Server will leave available for memory allocations within the SQL Server process, but outside the SQL Server memory pool. The memory outside of the memory pool is the area used by SQL Server for loading items such as extended procedure .dll files, the OLE DB providers referenced by distributed queries, and automation objects referenced in Transact-SQL statements. The default is 256 MB.
Use of this option might help tune memory allocation, but only when physical memory exceeds the configured limit set by the operating system on virtual memory available to applications. Use of this option might be appropriate in large memory configurations in which the memory usage requirements of SQL Server are atypical and the virtual address space of the SQL Server process is totally in use. Incorrect use of this option can lead to conditions under which an instance of SQL Server may not start or may encounter run-time errors.
Use the default for the -g parameter unless you see any of the following warnings in the SQL Server error log:
  • "Failed Virtual Allocate Bytes: FAIL_VIRTUAL_RESERVE <size>"
  • "Failed Virtual Allocate Bytes: FAIL_VIRTUAL_COMMIT <size>"
These messages might indicate that SQL Server is trying to free parts of the SQL Server memory pool in order to find space for items such as extended stored procedure .dll files or automation objects. In this case, consider increasing the amount of memory reserved by the -g switch.
Using a value lower than the default will increase the amount of memory available to the buffer pool and thread stacks; this may, in turn, provide some performance benefit to memory-intensive workloads in systems that do not use many extended stored procedures, distributed queries, or automation objects.
-h: Reserves virtual address space for Hot Add memory metadata when AWE is enabled with 32-bit SQL Server. Required for Hot-Add memory with 32-bit AWE, but consumes about 500 MB of virtual address space and makes memory tuning more difficult. Not required for 64-bit SQL Server. Hot Add Memory is only available for Windows Server 2003, Enterprise and Datacenter editions. It also requires special hardware support from the hardware vendor.
-mStarts an instance of SQL Server in single-user mode. When you start an instance of SQL Server in single-user mode, only a single user can connect, and the CHECKPOINT process is not started. CHECKPOINT guarantees that completed transactions are regularly written from the disk cache to the database device. (Typically, this option is used if you experience problems with system databases that should be repaired.) Enables the sp_configure allow updates option. By default, allow updates is disabled. Starting SQL Server in single-user mode enables any member of the computer's local Administrators group to connect to the instance of SQL Server as a member of the sysadmin fixed server role. For more information, see Troubleshooting: Connecting to SQL Server When System Administrators Are Locked Out. For more information about single-user mode, see Starting SQL Server in Single-User Mode.
-m"Client Application Name"When you use the -m option with SQLCMD or SQL Server Management Studio, you can limit the connections to a specified client application. For example, -m"SQLCMD" limits connections to a single connection and that connection must identify itself as the SQLCMD client program. Use this option when you are starting SQL Server in single-user mode and an unknown client application is taking the only available connection. To connect through the Query Editor in Management Studio, use -m"Microsoft SQL Server Management Studio - Query".
Client Application Name is case sensitive.
Important: Do not use this option as a security feature. The client application provides the client application name, and can provide a false name as part of the connection string.
-n: Does not use the Windows application log to record SQL Server events. If you start an instance of SQL Server with -n, we recommend that you also use the -e startup option. Otherwise, SQL Server events are not logged.
-s: Allows you to start a named instance of SQL Server. Without the -s parameter set, the default instance will try to start. You must switch to the appropriate BINN directory for the instance at a command prompt before starting sqlservr.exe. For example, if Instance1 were to use \mssql$Instance1 for its binaries, the user must be in the \mssql$Instance1\binn directory to start sqlservr.exe -s instance1.
-T trace#: Indicates that an instance of SQL Server should be started with a specified trace flag (trace#) in effect. Trace flags are used to start the server with nonstandard behavior. For more information, see Trace Flags (Transact-SQL).
-x: Disables the following monitoring features:
  • SQL Server performance monitor counters
  • Keeping CPU time and cache-hit ratio statistics
  • Collecting information for the DBCC SQLPERF command
  • Collecting information for some dynamic management views
  • Many extended-events event points
Caution: When you use the -x startup option, the information that is available for you to diagnose performance and functional problems with SQL Server is greatly reduced.
-E: Increases the number of extents that are allocated for each file in a filegroup. This option may be helpful for data warehouse applications that have a limited number of users running index or data scans. It should not be used in other applications because it might adversely affect performance. This option is not supported in 32-bit releases of SQL Server.
Note: When specifying a trace flag with the -T option, use an uppercase "T" to pass the trace flag number. A lowercase "t" is accepted by SQL Server, but this sets other internal trace flags that are required only by SQL Server support engineers. (Parameters specified in the Control Panel startup window are not read.)
Using Startup Options for Troubleshooting

Some startup options, such as single-user mode and minimal configuration mode, are principally used during troubleshooting. Starting the server for troubleshooting with the -m or -f options is most easily done at the command line, by manually starting sqlservr.exe.
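
For example, assuming the default instance and that the command prompt is already in that instance's Binn directory, a troubleshooting start in single-user mode might look like this:

sqlservr.exe -c -m

For a named instance, append -s followed by the instance name, as described in the option list above.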
Note: When SQL Server is started by using net start, startup options use a slash (/) instead of a hyphen (-).
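
For example, a sketch of starting the default instance as a service in single-user mode (the service name MSSQLSERVER is the default; a named instance would use MSSQL$InstanceName):

net start MSSQLSERVER /m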
Using Startup Options During Normal Operations

You may want to use some startup options every time you start SQL Server. These options, such as -g or starting with a trace flag, are most easily applied by configuring the startup parameters using SQL Server Management Studio or SQL Server Configuration Manager. These tools save the startup options as registry keys, enabling SQL Server to always start with those options.

AWE

Introduction

Recently, 4GB of physical RAM was added to a SQL Server 2000 Enterprise Edition instance I support, bringing the total physical RAM available on the machine up to 8GB. By using Windows 2000 Address Windowing Extensions (AWE) with SQL Server 2000 Enterprise or Developer Edition, on Windows 2000 Advanced Server or Windows 2000 Datacenter Server, SQL Server can take advantage of more than 4GB of physical RAM.

Although I had read a few articles about the AWE configuration process, this was the first time I had actually enabled the feature. After I completed the configuration, I discovered a few behaviors I had not read about, as well as configurations that could have caused issues had they not been addressed. In this article I will detail how I enabled AWE, as well as the behaviors I believe one should be aware of.

The scope of this article is the configuration of AWE for SQL Server 2000 Enterprise on a Windows 2000 Advanced Server machine. Configuring AWE on Windows 2000 Datacenter Server is, I assume, quite similar, but as I have not performed such an operation, I will not address it here. Also, this article assumes you are using a single-instance machine; multiple instances and AWE settings require special planning not discussed here.

Why use AWE?

Prior to adding the additional 4GB of memory, the application running against this particular SQL Server instance was experiencing significant I/O spikes throughout the day, and was running under maximum memory conditions. The buffer cache and procedure cache utilization was always 100%, with the procedure cache often being starved for memory.

After adding the additional memory and enabling AWE, I saw the I/O spikes decrease significantly. The extra memory allowed both the buffer cache and the procedure cache to grab a sufficient amount of memory for the application queries (I'll be writing another article describing how you can monitor such information). The bigger buffer cache decreased the number of read and write operations that had to go to disk.

Keep in mind that extra memory will not solve all I/O and memory issues. The performance outcome after configuring AWE will vary depending on your application activity, read/write ratio, network throughput, hardware components (CPU, RAID settings), and database size.


Physical Address Extension (PAE) is a processor feature that enables x86 processors to access more than 4 GB of physical memory on capable versions of Windows. Certain 32-bit versions of Windows Server running on x86-based systems can use PAE to access up to 64 GB or 128 GB of physical memory, depending on the physical address size of the processor.

Lock Pages in Memory

This policy determines which accounts can use a process to keep data in physical memory, preventing the system from paging the data to virtual memory on disk. The Lock Pages in Memory option is set to OFF by default in SQL Server. If you have system administrator permissions, you can enable the option manually by using the Windows Group Policy tool (gpedit.msc) and assigning this permission to the account under which SQL Server runs.

Although it is not required, we recommend locking pages in memory when using 64-bit operating systems. For 32-bit operating systems, the Lock Pages in Memory permission must be granted before AWE is configured for SQL Server.

Configuration Steps

1. Assuming 8GB of physical memory, after adding the extra RAM and prior to rebooting the server, your boot.ini should contain both the /3GB and /PAE switches. Without /3GB, the operating system reserves 2GB of each process's address space for itself, instead of the 1GB it reserves when the /3GB switch is present. The /PAE switch is required if you want SQL Server to use more than 4GB of RAM.
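
As a sketch, the boot.ini entry would end up looking something like this (the ARC path, system folder, and description are illustrative for a particular machine; only the /3GB and /PAE switches matter here):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /fastdetect /3GB /PAE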

2. Make sure that the SQL Server service account has been granted the "Lock Pages in Memory" privilege. Just because your service account is a member of the administrators group does NOT mean that it already has this policy setting. I configured this setting by selecting Start | Run, typing gpedit.msc, and selecting OK to launch the Group Policy editor. I expanded Computer Configuration, then Windows Settings, Security Settings, Local Policies, and then clicked User Rights Assignment. In the Policy pane (on the right), I double-clicked "Lock pages in memory" and added the account used to run the SQL Server service. For domain member machines, be sure that no security policies at the site, domain, or organizational unit level overwrite your policy change. Also, the policy change does not affect the permissions of the service account until the SQL Server service is restarted. But do not restart the service yet!

3. In Query Analyzer, connect as sysadmin to your SQL Server instance and enable AWE by executing the following script:

sp_configure 'show advanced options', 1
RECONFIGURE
GO

sp_configure 'awe enabled', 1
RECONFIGURE
GO

This setting does not take effect until you restart the SQL Server instance, but do not do it yet; there is more!

4. Once AWE is enabled, SQL Server will no longer dynamically manage memory. SQL Server will grab all available physical memory, leaving 128MB or less for the OS and other applications to use. This underscores the importance of setting a max server memory amount that SQL Server should be allowed to consume. Determine this upper limit based on the memory consumption of other applications on your server. Also note that a lower limit (min server memory) is no longer relevant in the context of AWE.

In this example, to enable 7GB as the maximum SQL Server memory allowed to be consumed, issue the following command:

sp_configure 'max server memory', 7168
RECONFIGURE
GO

sp_configure 'show advanced options', 0
RECONFIGURE
GO

5. NOW reboot your machine (assuming you have not rebooted since reconfiguring the boot.ini file). If you have already rebooted after configuring boot.ini, you need only restart the SQL Server instance.

6. After the restart, check the SQL Server error log in Enterprise Manager right away. The most recent startup log should contain the words "Address Windowing Extensions enabled" early in the log. If the configuration failed, the log will instead say "Cannot use Address Windowing Extensions because...", with the reason noted, such as not assigning "lock pages in memory".
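
If you prefer to check from Query Analyzer rather than Enterprise Manager, the undocumented xp_readerrorlog procedure returns the current error log; treat this as a convenience rather than a supported interface:

EXEC master.dbo.xp_readerrorlog
GO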

After the configuration

AWE awareness is not built into all Windows 2000 tools, so here are a few areas to be aware of when monitoring memory utilization.
The Windows Task Manager's "Processes" tab tells a misleading tale about how much memory the SQLSERVR.EXE process is using. I was alarmed to see that after a few hours, the process still appeared to be consuming just 118MB, versus the maximum 6.5GB I had configured it for. For a reality check, switch to the Performance tab within Task Manager and check the available physical memory. This amount should be roughly the total memory less the maximum amount you set for SQL Server and whatever other applications on the machine are using.

If you use Performance Monitor (System Monitor), keep in mind that for the SQLServer:Memory Manager object, the Target Server Memory (KB) and Total Server Memory (KB) counters will display the same number. This is because with AWE, SQL Server no longer dynamically manages the size of the memory used; it consumes the value of your 'max server memory' setting. This memory is made up of physical RAM only, not the paging file.

AWE memory can also be monitored in Performance Monitor (System Monitor) using the several AWE-related counters under the SQLServer:Buffer Manager performance object.
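
These counters are also exposed inside SQL Server 2000 through master.dbo.sysperfinfo, so a query along the following lines can pull them without opening Performance Monitor (the 'AWE%' counter-name filter is an assumption; inspect the table contents if it returns no rows):

SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Buffer Manager'
AND counter_name LIKE 'AWE%'
GO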

One last note on memory configuration: if you have not left enough RAM for other processes on the machine, do consider lowering the max server memory setting. Keep in mind that this change will not take effect until you restart the SQL Server service.

How To Rebuild System Databases In SQL 2005

We need to rebuild the system databases if the master database is corrupted or damaged. Let us discuss in detail how to rebuild the system databases in SQL Server 2005.

Step 1: Take a full backup of all the system and user databases prior to rebuilding the system databases, as we will need to restore them later to avoid data loss.
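
For example, a full backup of master might look like this (the backup path is illustrative; repeat for msdb, model, and each user database):

BACKUP DATABASE master TO DISK = N'D:\Backups\master_full.bak'
GO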

Step 2: Copy the SQL 2005 setup files from the CD to the hard disk. In the command prompt, navigate to the folder which contains the setup.exe file. In my case it is available in the D:\Setups\SQL 2005\Servers folder. The SQL Server instance we are going to rebuild is currently running. Now type the command below:

start /wait setup.exe /qn INSTANCENAME="MSSQLSERVER" REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD="XXXX"

where XXXX is the new sa password.

Use INSTANCENAME="MSSQLSERVER" for the default instance of SQL 2005, and
INSTANCENAME="MSSQL$INSTANCENAME" for a named instance of SQL 2005.

For example,
if you have a named instance "Deepak\Test" (where Deepak is the server name and Test is the instance name), type
INSTANCENAME="MSSQL$TEST" in the above command.

Step 3: After executing the command in Step 2, the rebuild process will start and should complete within about 5 minutes. You can verify whether the databases were rebuilt by navigating to the folder containing the data and log files for the system databases. If you sort the files by modified date, the timestamps will clearly show when they were last modified, which is the time the command in Step 2 was executed.
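
One quick way to check is to list the system database files sorted by newest modification date from the command prompt (the data path shown is the default for a SQL 2005 default instance and may differ on your server):

dir /O-D "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data"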



Step 4: Once the rebuild is completed, connect to the SQL Server using SSMS. In Object Explorer, only the system databases will be available.
If any user databases were present prior to the rebuild, they will no longer appear, and we need to do one of the following to retrieve them:
restore from the backup taken in Step 1, or
attach the data and log files of the user databases, as they are not cleaned up during the rebuild process (a sketch of the attach follows the note below).

NOTE: There is no need to detach the user databases before the rebuild, as their ldf and mdf files remain in the same path as usual and are not overwritten.
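
A sketch of the attach option using CREATE DATABASE ... FOR ATTACH (the database name and file paths are illustrative):

CREATE DATABASE MyUserDB
ON (FILENAME = N'D:\Data\MyUserDB.mdf'),
(FILENAME = N'D:\Data\MyUserDB_log.ldf')
FOR ATTACH
GO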

Now we need to restore the system databases from the backup which we took in Step 1.
The master database can be restored only in single-user mode (refer to Step 5); the other databases can be restored normally.
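
For example, a normal restore of msdb might look like this (the backup path is assumed, and the SQL Server Agent service should be stopped before restoring msdb):

RESTORE DATABASE msdb FROM DISK = N'D:\Backups\msdb_full.bak' WITH REPLACE
GO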

NOTE: The ldf and mdf files of the system databases are overwritten by the rebuild, and hence we cannot use a detach/attach operation for them.

Step 5: In order to restore the master database, perform the below steps:


  • Stop SQL Server, then start it from the command prompt using NET START MSSQLSERVER /c /m, which starts SQL Server in single-user mode.
  • Run the RESTORE DATABASE command for the master database (a sketch follows below). Once the restore is completed, SQL Server will shut down, so we need to start it normally and access the databases.
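
With SQL Server in single-user mode, the restore itself is a single command from sqlcmd or SSMS (the backup path is illustrative and should point to the backup taken in Step 1):

RESTORE DATABASE master FROM DISK = N'D:\Backups\master_full.bak' WITH REPLACE
GO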