Friday, November 18, 2011

Log Shipping vs. Mirroring

With Log Shipping:


Data Transfer: T-logs are backed up and transferred to the secondary server
Transactional Consistency: Both committed and uncommitted transactions are transferred (a log backup contains everything up to the point it was taken)
Server Limitation: Can be applied to multiple standby servers
Failover: Manual
Failover Duration: Can take more than 30 minutes
Role Change: Role change is manual
Client Re-direction: Manual changes required

With Database Mirroring:

Data Transfer: Individual T-log records are transferred continuously using TCP endpoints
Transactional Consistency: Only committed transactions are preserved; any transactions in flight at failover are rolled back
Server Limitation: Can be applied to only one mirror server
Failover: Automatic (requires high-safety mode with a witness server)
Failover Duration: Failover is fast, sometimes < 3 seconds but not more than 10 seconds
Role Change: Role change is fully automatic
Client Re-direction: Fully automatic for clients using ADO.NET 2.0 or SQL Native Client with a failover partner specified
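
For the automatic client redirection noted above, the application must connect through ADO.NET 2.0 or SQL Native Client and name the mirror as a failover partner. A minimal example connection string (server and database names assumed):

Server=PrincipalServer;Failover Partner=MirrorServer;Database=MyDB;Integrated Security=True;

If the principal is unreachable when the client connects, the provider transparently retries against the failover partner.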

Thursday, November 10, 2011

Top 10 SQL Server 2008 Features


Microsoft SQL Server 2008 provides a number of enhancements and new functionality, building on previous versions. Administration, database maintenance, manageability, availability, security, and performance, among others, all fall into the roles and responsibilities of the database administrator. This article provides the top ten new features of SQL Server 2008 (referenced in alphabetical order) that can help DBAs fulfill their responsibilities. In addition to a brief description of each feature, we include how this feature can help and some important use considerations.
1
Activity Monitor
When troubleshooting a performance issue or monitoring a server in real time, it is common for the DBA to execute a number of scripts or check a number of sources to collect general information about what processes are executing and where the problem may be. SQL Server 2008 Activity Monitor consolidates this information by detailing running and recently executed processes, graphically. The display gives the DBA a high-level view and the ability to drill down on processes and view wait statistics to help understand and resolve problems.
To open up Activity Monitor, just right-click on the registered server name in Object Explorer and then click Activity Monitor, or utilize the standard toolbar icon in SQL Server Management Studio. Activity Monitor provides the DBA with an overview section producing output similar to Windows Task Manager and drilldown components to look at specific processes, resource waits, data file I/Os, and recent expensive queries, as noted in Figure 1.
Figure 1: Display of SQL Server 2008 Activity Monitor view from Management Studio
NOTE: There is a refresh interval setting accessed by right-clicking on the Activity Monitor. Setting this value to a low threshold, under 10 seconds, in a high-volume production system can impact overall system performance.
DBAs can also use Activity Monitor to perform the following tasks:
  • Pause and resume Activity Monitor with a simple right-click. This can help the DBA to “save” a particular point-in-time for further investigation without it being refreshed or overwritten. However, be careful, because if you manually refresh, expand, or collapse a section, the data will be refreshed.
  • Right-click a line item to display the full query text or graphical execution plan via Recent Expensive Queries.
  • Execute a Profiler trace or kill a process from the Processes view. Profiler events include RPC:Completed, SQL:BatchStarting, and SQL:BatchCompleted events, and Audit Login and Audit Logout.
Activity Monitor also provides the ability to monitor activity on any SQL Server 2005 instance, local or remote, registered in SQL Server Management Studio.

2
SQL Server Audit
Having the ability to monitor and log events, such as who is accessing objects, what changes occurred, and what time changes occurred, can help the DBA to meet compliance standards for regulatory or organizational security requirements. Gaining insight into the events occurring within their environment can also help the DBA in creating a risk mitigation plan to keep the environment secure.
Within SQL Server 2008 (Enterprise and Developer editions only), SQL Server Audit provides automation that allows the DBA and others to enable, store, and view audits on various server and database components. The feature allows for auditing at a granularity of the server and/or database level.
There are server-level audit action groups, such as:
  • FAILED_LOGIN_GROUP, which tracks failed logins.
  • BACKUP_RESTORE_GROUP, which shows when a database was backed up or restored.
  • DATABASE_CHANGE_GROUP, which audits when a database is created, altered, or dropped.
Database-level audit action groups include:
  • DATABASE_OBJECT_ACCESS_GROUP, which is raised whenever a CREATE, ALTER, or DROP statement is executed on database objects.
  • DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, which is raised when GRANT, REVOKE, or DENY is utilized for database objects.
There are also audit actions, such as SELECT, DELETE, or EXECUTE. For more information, including a full list of the audit groups and actions, see SQL Server Audit Action Groups and Actions.
Audit results can be sent to a file or event log (Windows Security or System) for viewing. Audit information is created utilizing Extended Events, another new SQL Server 2008 feature.
By using SQL Server 2008 audits, the DBA can now answer questions that were previously very difficult to retroactively determine, such as “Who dropped this index?”, “When was the stored procedure modified?”, “What changed which might not be allowing this user to access this table?”, or even “Who ran SELECT or UPDATE statements against the [dbo].[Payroll] table?”
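As a rough sketch of the moving parts (the audit name, file path, and specification name below are assumptions, not from any particular deployment), auditing failed logins to a file might look like this:

USE master;
GO
-- Create the audit object and point it at a file target.
CREATE SERVER AUDIT LoginAudit
    TO FILE (FILEPATH = 'C:\Audits\');
GO
-- Bind a server-level audit action group to it.
CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
    FOR SERVER AUDIT LoginAudit
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
GO
-- Audits are created disabled; turn this one on.
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);
GO

The captured events can then be read back with the sys.fn_get_audit_file function.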
For more information about using SQL Server Audit and some examples of implementation, see the SQL Server 2008 Compliance Guide.

3
Backup Compression
This feature has long been a popular request of DBAs for SQL Server. The wait is finally over, and just in time! Many factors, including increased data retention periods and the need to physically store more data, have contributed to the recent explosion in database size. Backing up a large database can require a significant time window to be allotted to backup operations and a large amount of disk space allocated for use by the backup file(s).
With SQL Server 2008 backup compression, the backup file is compressed as it is written out, thereby requiring less storage, less disk I/O, and less time. In lab tests conducted with real customer data, we observed in many cases a reduction in the backup file size between 70% and 85%. Testing also revealed around a 45% reduction in the backup and restore time. It is important to note that the additional processing results in higher processor utilization. To help segregate the CPU-intensive backup and minimize its effect on other processes, one might consider utilizing another feature mentioned in this article, Resource Governor.
The compression is achieved by specifying the WITH COMPRESSION clause in the BACKUP command (for more information, see SQL Server Books Online) or by selecting it in the Options page in the Back Up Database dialog box. To prevent having to modify all existing backup scripts, there is also a global setting to enable compressing all backups taken on a server instance by default. (This setting is accessed by using the Database Settings page of the Server Properties dialog box or by running sp_configure with the backup compression default option set to 1.) While the compression option on the backup command needs to be explicitly specified, the restore command automatically detects that a backup is compressed and decompresses it during the restore operation.
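For illustration, a minimal sketch of both approaches (the database name and backup path are assumed):

-- Compress a single backup explicitly.
BACKUP DATABASE MyDatabase
    TO DISK = 'C:\Backups\MyDatabase.bak'
    WITH COMPRESSION, INIT;
GO
-- Or make compression the instance-wide default.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
GO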
Backup compression is a very useful feature that can help the DBA save space and time. For more information about tuning backup compression, see the technical note on Tuning the Performance of Backup Compression in SQL Server 2008. NOTE: Creating compressed backups is only supported in SQL Server 2008 Enterprise and Developer editions; however, every SQL Server 2008 edition allows for a compressed backup to be restored.

4
Central Management Servers
DBAs are frequently responsible for managing not one but many SQL Server instances in their environment. Having the ability to centralize the management and administration of a number of SQL Server instances from a single source can allow the DBA to save significant time and effort. The Central Management Servers implementation, which is accessed via the Registered Servers component in SQL Server Management Studio, allows the DBA to perform a number of administrative tasks on SQL Servers within the environment, from a single management console.
Central Management Servers allow the DBA to register a group of servers and apply functionality to the servers, as a group, such as:
  • Multiserver query execution: A script can now be executed from one source, across multiple SQL Servers, and be returned to that source, without the need to distinctly log into every server. This can be extremely helpful in cases where data from tables on two or more SQL Servers needs to be viewed or compared without the execution of a distributed query. Also, as long as the syntax is supported in earlier server versions, a query executed from the Query Editor in SQL Server 2008 can run against SQL Server 2005 and SQL Server 2000 instances as well. For more information, see the SQL Server Manageability Team Blog, specifically Multiple Server Query Execution in SQL Server 2008.
  • Import and evaluate policies across servers: As part of Policy-Based Management (another new SQL Server 2008 feature discussed in this article), SQL Server 2008 provides the ability to import policy files into particular Central Management Server Groups and allows policies to be evaluated across all of the servers registered in the group.
  • Control Services and bring up SQL Server Configuration Manager: Central Management Servers help provide a central place where DBAs can view service status and even change status for the services, assuming they have the appropriate permissions.
  • Import and export the registered servers: Servers within Central Management Servers can be exported and imported for use between DBAs or different SQL Server Management Studio instance installations. This is an alternative to DBAs importing or exporting into their own local groupings within SQL Server Management Studio.
Be aware that permissions are enforced via Windows authentication, so a user might have different rights and permissions depending on the server registered within the Central Management Server group. For more information, see Administering Multiple Servers Using Central Management Servers and a Kimberly Tripp blog: SQL Server 2008 Central Management Servers - have you seen these?

5
Data Collector and Management Data Warehouse
Performance tuning and troubleshooting are time-consuming tasks that can require in-depth SQL Server skills and an understanding of database internals. Windows System Monitor (Perfmon), SQL Server Profiler, and dynamic management views (DMVs) helped with some of this, but they were often intrusive, laborious to use, or the dispersed data collection methods were cumbersome to easily summarize and interpret.
To provide actionable performance insight, SQL Server 2008 delivers a fully extensible performance data collection and warehouse tool also known as the data collector. The tool includes several out-of-the-box data collection agents, a centralized data repository for storing performance data called the management data warehouse, and several precanned reports to present the captured data. The data collector is a scalable tool that can collect and assimilate data from multiple sources, such as dynamic management views, Perfmon, and Transact-SQL queries, using a fully customizable data collection frequency. The data collector can be extended to collect data for any measurable attribute of an application.
Another helpful feature of the management data warehouse is that it can be installed on any SQL Server and then collect data from one or more SQL Server instances within the environment. This can help minimize the performance impact on production systems and improve the scalability in terms of monitoring and collecting data from a number of servers. In lab testing we observed around a 4% reduction in throughput when running the agents and the management data warehouse on a server running at capacity (via an OLTP workload). The impact can vary based on the collection interval (as the test was over an extended workload with 15-minute pulls into the warehouse), and it can be exacerbated during intervals of data collection. Finally, some capacity planning should be done, because the DCExec.exe process will take up some memory and processor resources, and writes to the management data warehouse will increase the I/O workload and the space allocation needed where the data and log files are located.
The diagram (Figure 2) below depicts a typical data collector report.
Figure 2: Display of SQL Server 2008 Data Collector Report
This report shows SQL Server processing over the period of time data was collected. Events such as waits, CPU, I/O, memory usage, and expensive query statistics are collected and displayed. A DBA can also drill down into the reports to focus on a particular query or operation to further investigate, detect, and resolve performance problems. This data collection, storage, and reporting can allow the DBA to establish proactive monitoring of the SQL Server(s) in the environment and go back over time to understand and assess changes to performance over the time period monitored. The data collector and management data warehouse feature is supported in all editions (except SQL Server Express) of SQL Server 2008.
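The collector can also be driven from Transact-SQL. A minimal sketch, assuming the management data warehouse has already been configured (the collection set ID below is an assumption; take the real one from the first query):

USE msdb;
GO
-- List the installed collection sets and whether they are running.
SELECT collection_set_id, name, is_running
FROM dbo.syscollector_collection_sets;
GO
-- Start a collection set by ID.
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = 3;
GO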

6
Data Compression
The ability to easily manage a database can greatly enhance the opportunity for DBAs to accomplish their regular task lists. As table, index, and file sizes grow and very large databases (VLDBs) become commonplace, the management of data and unwieldy file sizes has become a growing pain point. Also, with more data being queried, the need for large amounts of memory or the necessity to do physical I/O can place a larger burden on DBAs and their organizations. Many times this results in DBAs and organizations securing servers with more memory and/or I/O bandwidth or having to pay a performance penalty.
Data compression, introduced in SQL Server 2008, provides a resolution to help address these problems. Using this feature, a DBA can selectively compress any table, table partition, or index, resulting in a smaller on-disk footprint, smaller memory working-set size, and reduced I/O. The act of compression and decompression will impact CPU; however, this impact is in many cases offset by the gains in I/O savings. Configurations that are bottlenecked on I/O can also see an increase in performance due to compression.
In some lab tests, enabling data compression resulted in a 50-80% saving in disk space. The space savings did vary significantly with minimal savings on data that did not contain many repeating values or where the values required all the bytes allocated by the specified data type. There were also workloads that did not show any gains in performance. However, on data that contained a lot of numeric data and many repeating values, we saw significant space savings and observed performance increases from a few percentage points up to 40-60% on some sample query workloads.
SQL Server 2008 supports two types of compression: row compression, which compresses the individual columns of a table, and page compression, which compresses data pages using row, prefix, and dictionary compression. The amount of compression achieved is highly dependent on the data types and data contained in the database. In general we have observed that using row compression results in lower overhead on the application throughput but saves less space. Page compression, on the other hand, has a higher impact on application throughput and processor utilization, but it results in much larger space savings. Page compression is a superset of row compression, implying that an object or partition of an object that is compressed using page compression also has row compression applied to it. Also, SQL Server 2008 does support the vardecimal storage format of SQL Server 2005 SP2. However, because this storage format is a subset of row compression, it is a deprecated feature and will be removed from future product versions.
Both row and page compression can be applied to a table or index in an online mode, that is, without any interruption to application availability. However, a single partition of a partitioned table cannot be compressed or uncompressed online. In our testing we found that using a hybrid approach, where only the largest few tables were compressed, resulted in the best performance in terms of saving significant disk space while having a minimal negative impact on performance. Because there are disk space requirements, similar to what would be needed to create or rebuild an index, care should be taken in implementing compression as well. We also found that compressing the smallest objects first, from the list of objects you desire to compress, minimized the need for additional disk space during the compression process.
Data compression can be implemented via Transact-SQL or the Data Compression Wizard. To determine how compressing an object will affect its size, you can use the sp_estimate_data_compression_savings system stored procedure or the Data Compression Wizard to calculate the estimated space savings. Database compression is only supported in SQL Server 2008 Enterprise and Developer editions. It is implemented entirely within the database and does not require any application modification.
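A minimal sketch of both steps against a hypothetical dbo.Sales table:

-- Estimate the savings before committing to a rebuild.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'Sales',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
GO
-- Apply page compression by rebuilding the table.
ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = PAGE);
GO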
For more information about using compression, see Creating Compressed Tables and Indexes.

7
Policy-Based Management
In a number of business scenarios, there is a need to maintain certain configurations or enforce policies either within a specific SQL Server, or many times across a group of SQL Servers. A DBA or organization may require a particular naming convention to be implemented on all user tables or stored procedures that are created, or a required configuration change to be defined across a number of servers in the same manner.
Policy-Based Management (PBM) provides DBAs with a wide variety of options in managing their environment. Policies can be created and checked for compliance. If a target (such as a SQL Server database engine, a database, a table, or an index) is out of compliance, the administrator can automatically reconfigure it to be in compliance. There are also a number of evaluation modes (of which many are automated) that can help the DBA check for policy compliance, log and notify when a policy violation occurs, and even roll back the change to keep in compliance with the policy. For more information about evaluation modes and how they are mapped to facets (a PBM term also discussed in the blog), see the SQL Server Policy-Based Management blog.
The policies can be exported and imported as .xml files for evaluation and implementation across multiple server instances. Also, in SQL Server Management Studio and the Registered Servers view, policies can be evaluated across multiple servers if they are registered under a local server group or a Central Management Server group.
Not all of the functionality of Policy-Based Management can be implemented on earlier versions of SQL Server. However, the policy reporting feature can be utilized on SQL Server 2005 and SQL Server 2000. For more information about administering servers by using Policy-Based Management, see Administering Servers by Using Policy-Based Management in SQL Server Books Online. For more information about the technology itself, including examples, see the SQL Server 2008 Compliance Guide.

8
Predictable Performance and Concurrency
A significant problem many DBAs face is trying to support SQL Servers with ever-changing workloads, and achieving some level of predictable performance (or minimizing variance in plans and performance). Unexpected query performance, plan changes, and/or general performance issues can come about due to a number of factors, including increased application load running against SQL Server or version upgrades of the database itself. Getting predictable performance from queries or operations run against SQL Server can greatly enhance the DBA's ability to meet and maintain availability, performance, and/or business continuity goals (OLAs or SLAs).
SQL Server 2008 provides a few feature changes that can help provide more predictable performance: enhancements to the SQL Server 2005 plan guides (or plan freezing) and a new option to control lock escalation at a table level. Both of these enhancements can provide a more predictable and structured interaction between the application and the database.
First, plan guides:
SQL Server 2005 enabled greater query performance stability and predictability by providing a new feature called plan guides to enable specifying hints for queries that could not be modified directly in the application. For more information, see the Forcing Query Plans white paper. While very powerful, the USE PLAN query hint supported only SELECT DML operations, and plan guides were often cumbersome to use due to their sensitivity to formatting.
SQL Server 2008 builds on the plan guides mechanism in two ways: It expands the support for the USE PLAN query hint to cover all DML statements (INSERT, UPDATE, DELETE, MERGE), and it introduces a new plan freezing feature that can be used to directly create a plan guide from (freeze) any query plan that exists in the SQL Server plan cache, as in the following example.
EXEC sp_create_plan_guide_from_handle
    @name = N'MyQueryPlan',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;
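The @plan_handle and @statement_start_offset values come from the plan cache. A minimal end-to-end sketch (the LIKE filter used to find the target query is hypothetical):

-- Locate the cached plan for the query we want to freeze.
DECLARE @plan_handle VARBINARY(64), @offset INT;
SELECT @plan_handle = qs.plan_handle,
       @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%FROM dbo.Sales%';

-- Freeze that plan as a guide.
EXEC sp_create_plan_guide_from_handle
    @name = N'MyQueryPlan',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;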
A plan guide created by either means has a database scope and is stored in the sys.plan_guides table. Plan guides are only used to influence the query plan selection process of the optimizer and do not eliminate the need for the query to be compiled. A new function, sys.fn_validate_plan_guide, has also been introduced to validate existing SQL Server 2005 plan guides and ensure their compatibility with SQL Server 2008. Plan freezing is available in the SQL Server 2008 Standard, Enterprise, and Developer editions.
Next, lock escalation:
Lock escalation has often caused blocking and sometimes even deadlocking problems, which the DBA is forced to troubleshoot and resolve. Previous versions of SQL Server permitted controlling lock escalation (trace flags 1211 and 1224), but this was only possible at an instance-level granularity. While this helped some applications work around the problem, it caused severe issues for others. Another problem with the SQL Server 2005 lock escalation algorithm was that locks on partitioned tables were directly escalated to the table level, rather than the partition level.
SQL Server 2008 offers a solution for both of these problems. A new option has been introduced to control lock escalation at a table level. By using the LOCK_ESCALATION option of the ALTER TABLE command, locks can be specified not to escalate, or to escalate to the partition level for partitioned tables. Both these enhancements help improve scalability and performance without negative side effects on other objects in the instance. Lock escalation is specified at the database-object level and does not require any application change. It is supported in all editions of SQL Server 2008.
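For illustration, a minimal sketch against a hypothetical table:

-- On a partitioned table, let escalation stop at the partition (HoBT) level.
ALTER TABLE dbo.Sales SET (LOCK_ESCALATION = AUTO);
GO
-- Or prevent lock escalation on this table entirely.
ALTER TABLE dbo.Sales SET (LOCK_ESCALATION = DISABLE);
GO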

9
Resource Governor
Maintaining a consistent level of service by preventing runaway queries and guaranteeing resources for mission-critical workloads has been a challenge. In the past there was no way of guaranteeing a certain amount of resources to a set of queries and prioritizing the access. All queries had equal access to all the available resources.
SQL Server 2008 introduces a new feature called Resource Governor, which helps address this issue by enabling users to differentiate workloads and allocate resources as they are requested. Resource Governor limits can easily be reconfigured in real time with minimal impact on the workloads that are executing. The allocation of the workload to a resource pool is configurable at the connection level, and the process is completely transparent to the application.
The diagram below depicts the resource allocation process. In this scenario three workload groups (Admin Workload, OLTP Workload, and Report Workload) are configured, and the OLTP Workload group is assigned a high priority. In parallel, two resource pools (Admin Pool and Application Pool) are configured with specific memory and processor (CPU) limits as shown. As a final step the Admin Workload is assigned to the Admin Pool and the OLTP and Report workloads are assigned to the Application Pool.

Figure 3: Resource Governor workload groups and resource pools
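
A minimal Transact-SQL sketch of the configuration the diagram describes (the pool, group, and application names are assumptions, and the percentage limits are illustrative only):

USE master;
GO
-- Resource pools with CPU and memory caps.
CREATE RESOURCE POOL AdminPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
CREATE RESOURCE POOL ApplicationPool
    WITH (MAX_CPU_PERCENT = 80, MAX_MEMORY_PERCENT = 80);
-- Workload groups mapped onto the pools; the OLTP group gets high importance.
CREATE WORKLOAD GROUP AdminWorkload USING AdminPool;
CREATE WORKLOAD GROUP OLTPWorkload WITH (IMPORTANCE = HIGH) USING ApplicationPool;
CREATE WORKLOAD GROUP ReportWorkload USING ApplicationPool;
GO
-- Classifier function: route each new session to a group by application name.
CREATE FUNCTION dbo.rgClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE APP_NAME()
                WHEN 'AdminTool' THEN N'AdminWorkload'
                WHEN 'OLTPApp'   THEN N'OLTPWorkload'
                WHEN 'ReportApp' THEN N'ReportWorkload'
                ELSE N'default'
            END);
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

The classifier function runs for every new session, so it should be kept simple and fast.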

Below are some other points you need to consider when using Resource Governor.
  • Resource Governor relies on login credentials, host name, or application name as a ‘resource pool identifier’, so using a single login for an application, depending on the number of clients per server, might make creating pools more difficult.
  • Database-level object grouping, in which the resource governing is done based on the database objects being referenced, is not supported.
  • Resource Governor only allows resource management within a single SQL Server instance. For managing multiple SQL Server instances or processes within a server from a single source, Windows System Resource Manager should be considered.
  • Only processor and memory resources can be configured. I/O resources cannot be controlled.
  • Dynamically switching workloads between resource pools once a connection is made is not possible.
  • Resource Governor is only supported in SQL Server 2008 Enterprise and Developer editions and can only be used for the SQL Server database engine; SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) cannot be controlled.
10
Transparent Data Encryption (TDE)
Security is one of the top concerns of many organizations. There are many different layers to securing one of the most important assets of an organization: its data. In most cases, organizations do well at securing their active data via the use of physical security, firewalls, and tightly controlled access policies. However, when physical media such as the backup tape or disk on which the data resides are compromised, the above security measures are of no use, because a rogue user can simply restore the database and get full access to the data.
SQL Server 2008 offers a solution to this problem by way of transparent data encryption (TDE). TDE performs real-time I/O encryption and decryption of the data and log files by using a database encryption key (DEK). The DEK is a symmetric key secured by using a certificate stored in the master database of the server, or an asymmetric key protected by an Extensible Key Management (EKM) module.
TDE is designed to protect data ‘at rest’, which means the data stored in the .mdf, .ndf, and .ldf files cannot be viewed using a hex editor or other means. However, data that is not at rest, such as the results of a SELECT statement in SQL Server Management Studio, will continue to be visible to users who have rights to view the table. Also, because TDE is implemented at the database level, the database can leverage indexes and keys for query optimization. TDE should not be confused with column-level encryption, which is a separate feature that allows encryption of data even when it is not at rest.
Encrypting a database is a one-time process that can be initiated via a Transact-SQL command or SQL Server Management Studio, and it is executed as a background thread. You can monitor the encryption or decryption status using the sys.dm_database_encryption_keys dynamic management view. In a lab test we conducted, we were able to encrypt a 100 GB database using the AES_128 encryption algorithm in about an hour. While the overhead of using TDE is largely dictated by the application workload, in some of the testing conducted that overhead was measured to be less than 5%. One potential performance impact to be aware of is this: If any database within the instance has TDE applied, the tempdb system database is also encrypted. Finally, of note when combining features:
  • When backup compression is used to compress an encrypted database, the size of the compressed backup is larger than if the database were not encrypted, because encrypted data does not compress well.
  • Encrypting the database does not affect data compression (row or page).
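
For reference, a minimal sketch of the one-time enablement described above (the certificate, password, and database names are assumptions):

USE master;
GO
-- Master key and certificate that protect the database encryption key.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE MyDatabase;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
GO
-- Track the background encryption scan via the DMV mentioned above.
SELECT DB_NAME(database_id) AS database_name, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys;

Remember to back up the certificate; without it, the encrypted database and its backups cannot be restored on another server.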
TDE enables organizations to meet the demands of regulatory compliance and overall concern for data privacy. TDE is only supported in the SQL Server 2008 Enterprise and Developer editions and can be enabled without changing existing applications. For more information, see Database Encryption in SQL Server 2008 Enterprise Edition or the SQL Server 2008 Compliance Guide discussion on Using Transparent Data Encryption.

In conclusion, SQL Server 2008 offers features, enhancements, and functionality to help improve the Database Administrator experience. While a Top 10 list was provided above, there are many more features included within SQL Server 2008 that help improve the experience for DBAs and other users alike. For a Top 10 feature set for other SQL Server focus areas, see the other SQL Server 2008 Top 10 articles on this site. For a full list of features and detailed descriptions, see SQL Server Books Online and the SQL Server 2008 Overview Web site.

Friday, October 21, 2011

Fixing damaged pages using page restore or manual inserts

Here's an interesting scenario that cropped up today. You have a database on a RAID array that failed and has zero'd out a page. How can you get the data back?

There are two ways to do it, depending on the database recovery model and version of SQL Server - single-page restore or manual insert/select - both of which rely on you having a backup of the database. You can use single-page restore if you're on SQL Server 2005 and the database is in Full or Bulk-Logged recovery mode; otherwise you need to use the manual method, and that will only work as long as you know the data being salvaged hasn't changed since the last backup.

Let's try them both. Here's a script to create a test database and make a backup of it:

-- Create the database.
USE master;
GO
CREATE DATABASE dbccpagetest;
GO
ALTER DATABASE dbccpagetest SET RECOVERY FULL;
GO

-- Create a table to play with.
USE dbccpagetest;
GO
CREATE TABLE sales (
    salesID INT IDENTITY,
    customerID INT DEFAULT CONVERT (INT, 100000 * RAND ()),
    salesDate DATETIME DEFAULT GETDATE (),
    salesAmount MONEY);
CREATE CLUSTERED INDEX salesCI ON sales (salesID);
GO

-- Populate the table.
SET NOCOUNT ON;
GO
DECLARE @count INT;
SELECT @count = 0;
WHILE (@count < 5000)
BEGIN
    INSERT INTO sales (salesAmount) VALUES (100 * RAND ());
    SELECT @count = @count + 1;
END;
GO

-- Take a full backup.
BACKUP DATABASE dbccpagetest TO DISK = 'C:\dbccpagetest.bak' WITH INIT;
GO

I'm going to simulate our scenario by shutting down the database and using a hex editor to zero out page 158 of the database. (This translates to byte offset 1294336 of the file being zero'd for 8192 bytes.)

Now if I run checkdb, I get the following:

Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:158) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
Msg 8928, Level 16, State 1, Line 1
Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data): Page (1:158) could not be processed. See other errors for details.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'sales' (object ID 2073058421).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'dbccpagetest'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (dbccpagetest).

What does the page look like?

DBCC TRACEON (3604);
GO
DBCC PAGE (dbccpagetest, 1, 158, 3);
GO

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

PAGE: (0:0)

BUFFER:

BUF @0x02C0632C

bpage = 0x04C12000 bhash = 0x00000000 bpageno = (1:158)
bdbid = 9 breferences = 0 bUse1 = 37241
bstat = 0xc00009 blog = 0x89898989 bnext = 0x00000000

PAGE HEADER:

Page @0x04C12000

m_pageId = (0:0) m_headerVersion = 0 m_type = 0
m_typeFlagBits = 0x0 m_level = 0 m_flagBits = 0x200
m_objId (AllocUnitId.idObj) = 0 m_indexId (AllocUnitId.idInd) = 0 Metadata: AllocUnitId = 0
Metadata: PartitionId = 0 Metadata: IndexId = -1 Metadata: ObjectId = 0
m_prevPage = (0:0) m_nextPage = (0:0) pminlen = 0
m_slotCnt = 0 m_freeCnt = 0 m_freeData = 0
m_reservedCnt = 0 m_lsn = (0:0:0) m_xactReserved = 0
m_xdesId = (0:0) m_ghostRecCnt = 0 m_tornBits = 16777216

Allocation Status

GAM (1:2) = ALLOCATED SGAM (1:3) = NOT ALLOCATED
PFS (1:1) = 0x60 MIXED_EXT ALLOCATED 0_PCT_FULL DIFF (1:6) = NOT CHANGED
ML (1:7) = NOT MIN_LOGGED

Msg 2514, Level 16, State 5, Line 2
DBCC PAGE error: Invalid page type - dump style 3 not possible.

Note the error at the end of the output - DBCC PAGE can't do an in-depth dump because it doesn't know what page type the page is. Let's try a full page hex dump using dump style 2 instead:

DBCC PAGE (dbccpagetest, 1, 158, 2);
GO

PAGE: (0:0)

DATA:

Memory Dump @0x44F3C000

44F3C000: 00000000 00020000 00000000 00000000 †................
44F3C010: 00000000 00000000 00000000 00000000 †................

44F3DFE0: 00000000 00000000 00000000 00000000 †................
44F3DFF0: 00000000 00000000 00000000 00000000 †................

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

It really is all zero. First we'll fix it using page restore.

USE master;
GO
RESTORE DATABASE dbccpagetest PAGE = '1:158' FROM DISK = 'C:\dbccpagetest.bak';
GO

Processed 1 pages for database 'dbccpagetest', file 'dbccpagetest' on file 1.
The roll forward start point is now at log sequence number (LSN) 32000000047000001. Additional roll forward past LSN 33000000001700001 is required to complete the restore sequence.
RESTORE DATABASE ... FILE= successfully processed 1 pages in 0.176 seconds (0.046 MB/sec).

Isn't that cool? You can restore up to 1000 single pages from a backup at a time. For VLDBs, this cuts the recovery time WAY down. Now we need to roll forward the log. We don't have any more log backups, so we can finish the roll forward by backing up and restoring the tail of the log.

-- Need to complete roll forward. Backup the log tail...
BACKUP LOG dbccpagetest TO DISK = 'C:\dbccpagetest_log.bak' WITH INIT;
GO
-- ... and restore it again.
RESTORE LOG dbccpagetest FROM DISK = 'C:\dbccpagetest_log.bak';
GO

Processed 5 pages for database 'dbccpagetest', file 'dbccpagetest_log' on file 1.
BACKUP LOG successfully processed 5 pages in 0.146 seconds (0.248 MB/sec).
Processed 0 pages for database 'dbccpagetest', file 'dbccpagetest' on file 1.
RESTORE LOG successfully processed 0 pages in 0.004 seconds (0.000 MB/sec).

And now we should have a clean database:

DBCC CHECKDB (dbccpagetest) WITH NO_INFOMSGS;
GO

Command(s) completed successfully.

Easy. But what if we can't do a page restore? Assuming I've corrupted the database in exactly the same way again, the first thing to do is make sure we can restore the backup, and then see what data range is on that page:

RESTORE DATABASE dbccpagetest_copy FROM DISK = 'C:\dbccpagetest.bak' WITH
    MOVE N'dbccpagetest' TO N'C:\dbccpagetest_copy.mdf',
    MOVE N'dbccpagetest_log' TO N'C:\dbccpagetest_log.ldf',
    REPLACE;
GO
DBCC PAGE (dbccpagetest_copy, 1, 158, 3);
GO

Processed 184 pages for database 'dbccpagetest_copy', file 'dbccpagetest' on file 1.
Processed 2 pages for database 'dbccpagetest_copy', file 'dbccpagetest_log' on file 1.
RESTORE DATABASE successfully processed 186 pages in 0.361 seconds (4.205 MB/sec).

PAGE: (1:158)

BUFFER:

BUF @0x02BE8D38

bpage = 0x03FB4000 bhash = 0x00000000 bpageno = (1:158)
bdbid = 10 breferences = 1 bUse1 = 38283
bstat = 0xc00009 blog = 0x159a2159 bnext = 0x00000000

PAGE HEADER:

Page @0x03FB4000

m_pageId = (1:158) m_headerVersion = 1 m_type = 1
m_typeFlagBits = 0x4 m_level = 0 m_flagBits = 0x8200
m_objId (AllocUnitId.idObj) = 68 m_indexId (AllocUnitId.idInd) = 256
Metadata: AllocUnitId = 72057594042384384
Metadata: PartitionId = 72057594038386688 Metadata: IndexId = 1
Metadata: ObjectId = 2073058421 m_prevPage = (1:157) m_nextPage = (1:159)
pminlen = 28 m_slotCnt = 245 m_freeCnt = 11
m_freeData = 7691 m_reservedCnt = 0 m_lsn = (24:453:8)
m_xactReserved = 0 m_xdesId = (0:0) m_ghostRecCnt = 0
m_tornBits = -1020457745

Allocation Status

GAM (1:2) = ALLOCATED SGAM (1:3) = NOT ALLOCATED
PFS (1:1) = 0x60 MIXED_EXT ALLOCATED 0_PCT_FULL DIFF (1:6) = NOT CHANGED
ML (1:7) = NOT MIN_LOGGED

Slot 0 Offset 0x60 Length 31

Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP
Memory Dump @0x4542C060

00000000: 10001c00 d5030000 5bd30000 3f771101 †........[...?w..
00000010: b9980000 baa10a00 00000000 0500e0††††...............

UNIQUIFIER = [NULL]
Slot 0 Column 1 Offset 0x4 Length 4
salesID = 981
Slot 0 Column 2 Offset 0x8 Length 4
customerID = 54107
Slot 0 Column 3 Offset 0xc Length 8
salesDate = Jan 17 2007 4:35PM
Slot 0 Column 4 Offset 0x14 Length 8
salesAmount = 69.68

Slot 244 Offset 0x1dec Length 31

Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP
Memory Dump @0x4542DDEC

00000000: 10001c00 c9040000 bfa10000 57771101 †............Ww..
00000010: b9980000 c6b80500 00000000 0500e0††††...............

UNIQUIFIER = [NULL]
Slot 244 Column 1 Offset 0x4 Length 4
salesID = 1225
Slot 244 Column 2 Offset 0x8 Length 4
customerID = 41407
Slot 244 Column 3 Offset 0xc Length 8
salesDate = Jan 17 2007 4:35PM
Slot 244 Column 4 Offset 0x14 Length 8
salesAmount = 37.50

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

So we're looking at salesID range 981 to 1225 inclusive. Before we can copy the rows back to the damaged database, we need to get rid of the corrupt page. Repair should delete the page for us. First I'm going to take another backup though - just in case something goes wrong!

BACKUP DATABASE dbccpagetest TO DISK = 'C:\dbccpagetest_corrupt.bak' WITH INIT;
GO
ALTER DATABASE dbccpagetest SET SINGLE_USER;
GO
DBCC CHECKDB (dbccpagetest, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
GO
ALTER DATABASE dbccpagetest SET MULTI_USER;
GO

Processed 184 pages for database 'dbccpagetest', file 'dbccpagetest' on file 1.
Processed 4 pages for database 'dbccpagetest', file 'dbccpagetest_log' on file 1.
BACKUP DATABASE successfully processed 188 pages in 0.380 seconds (4.052 MB/sec).
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:158) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
The error has been repaired.
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
CHECKDB fixed 0 allocation errors and 1 consistency errors not associated with any single object.
Repair: The Clustered index successfully rebuilt for the object "dbo.sales" in database "dbccpagetest".
Repair: The page (1:158) has been deallocated from object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data).
Msg 8945, Level 16, State 1, Line 1
Table error: Object ID 2073058421, index ID 1 will be rebuilt.
The error has been repaired.
Msg 8928, Level 16, State 1, Line 1
Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data): Page (1:158) could not be processed. See other errors for details.
The error has been repaired.
Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data). Page (1:158) was not seen in the scan although its parent (1:154) and previous (1:157) refer to it. Check any previous errors.
The error has been repaired.
Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 2073058421, index ID 1, partition ID 72057594038386688, alloc unit ID 72057594042384384 (type In-row data). Page (1:159) is missing a reference from previous page (1:158). Possible chain linkage problem.
The error has been repaired.
CHECKDB found 0 allocation errors and 3 consistency errors in table 'sales' (object ID 2073058421).
CHECKDB fixed 0 allocation errors and 3 consistency errors in table 'sales' (object ID 2073058421).
CHECKDB found 0 allocation errors and 4 consistency errors in database 'dbccpagetest'.
CHECKDB fixed 0 allocation errors and 4 consistency errors in database 'dbccpagetest'.

We should check the row count to see that the count has dropped from the initial 5000 rows we inserted:

USE dbccpagetest;
GO
SELECT COUNT (*) FROM sales;
GO
SELECT COUNT (*) FROM sales WHERE salesID > 980 AND salesID < 1226;
GO

And we're down to 4755 rows, as expected, with zero rows in that range. All we need to do now is to copy the missing rows over from the restored copy. Remember, this will only work if you know that the data being salvaged hasn't changed since the backup was taken - otherwise you'll have old and new data mixed in the table, which will play havoc with your business. Before we copy the rows: we know we've got an identity column we'd like to preserve, so we set IDENTITY_INSERT on, which tells the server not to generate new identity values for the inserted rows.

-- Make sure identity values survive.
SET IDENTITY_INSERT sales ON;
GO

-- Insert the missing rows.
SET NOCOUNT OFF;
GO
INSERT INTO sales (salesID, customerID, salesDate, salesAmount)
SELECT * FROM dbccpagetest_copy.dbo.sales AS R
WHERE R.salesID > 980 AND R.salesID < 1226;
GO

-- Restore identity behavior.
SET IDENTITY_INSERT sales OFF;
GO

(245 row(s) affected)

We copy over 245 rows, and checking the row count again shows we're back to 5000 rows.




Sunday, August 21, 2011

Moving data from the mdf file to the ndf file(s)

SCENARIO:
1) You have an MDF file from a database which has grown to a size that has almost exhausted the disk space.
2) You now want to keep the MDF from growing further.
3) You also want to transfer the data from the MDF file to the NDF file(s) and shrink the MDF file size.

METHOD 1: (Permanent Solution, may take a long time to complete)
1. This activity comes with a requirement that no users should connect to the database during the maintenance activity.
2. Identify the table(s) that are taking up all the space and see if they have a clustered index (using SQL Server Management Studio)
3. If yes, move the tables in the following manner:
a. Create a new file group and a file (in the new file group) on a different disk with a large initial size, at least twice the size of the data you are moving onto that file.
b. Run the below command
sp_help '<table_name>' -- Identify the clustered index name
-- For SQL Server 2000:
CREATE CLUSTERED INDEX <index_name>
ON <table_name> (<column_list>)
WITH DROP_EXISTING
ON [<new_filegroup>]
-- For SQL Server 2005 and onwards:
CREATE CLUSTERED INDEX <index_name>
ON <table_name> (<column_list>)
WITH (DROP_EXISTING = ON)
ON [<new_filegroup>]
4. Shrink the database MDF file, to free the space to the operating system.
DBCC SHRINKFILE (1, NOTRUNCATE) --This moves allocated pages from the end of the data file to unallocated pages in the front of the file.
DBCC SHRINKFILE (1, TRUNCATEONLY) --Releases all free space at the end of the file to the operating system.
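Putting steps 3a and 3b together, a minimal sketch (the database, filegroup, file, table, and index names are all hypothetical):

-- Step 3a: new filegroup and file on a different disk.
ALTER DATABASE MyDB ADD FILEGROUP SecondaryFG;
ALTER DATABASE MyDB ADD FILE
    (NAME = 'MyDB_Data2',
     FILENAME = 'D:\Data\MyDB_Data2.ndf',
     SIZE = 20GB,
     FILEGROWTH = 1GB)
TO FILEGROUP SecondaryFG;
GO
-- Step 3b: rebuild the clustered index onto the new filegroup; the table's data moves with it.
USE MyDB;
GO
CREATE CLUSTERED INDEX IX_BigTable_ID
ON dbo.BigTable (ID)
WITH (DROP_EXISTING = ON)
ON [SecondaryFG];
GO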
METHOD 2: (Temporary workaround to keep the MDF from growing further)
  • Create about two more files in the same filegroup but on different disks, with a large initial file size. (If such files are already present, manually increase their size to a fairly large value.)
  • The bigger the other NDF files, the more free space is available; the more free space available in the NDFs, the less the MDF file will grow. (This is in accordance with the proportional fill algorithm.)
  • The word "large" used above is relative to your environment, so we suggest you do capacity planning for it. (Failing that, increase the NDF file sizes to at least about twice the size of the MDF.)
Filegroups use a proportional fill strategy across all the files within each filegroup. As data is written to the filegroup, the SQL Server Database Engine writes an amount proportional to the free space in the file to each file within the filegroup, instead of writing all the data to the first file until full. It then writes to the next file. For example, if file f1 has 100 MB free and file f2 has 200 MB free, one extent is allocated from file f1, two extents from file f2, and so on. In this way, both files become full at about the same time, and simple striping is achieved.
As soon as all the files in a filegroup are full, the Database Engine automatically expands one file at a time in a round-robin manner to allow for more data, provided that the database is set to grow automatically. For example, a filegroup is made up of three files, all set to automatically grow. When space in all the files in the filegroup is exhausted, only the first file is expanded. When the first file is full and no more data can be written to the filegroup, the second file is expanded. When the second file is full and no more data can be written to the filegroup, the third file is expanded. If the third file becomes full and no more data can be written to the filegroup, the first file is expanded again, and so on.
Please refer to SQL Server Books Online for more details.

Orphaned Users with Database Mirroring and Log Shipping

Let’s start with a little background on how SQL Server logins and database users actually work. A login (Windows/SQL authenticated) gives you permission to log onto a SQL Server instance. With the help of the login, an individual can access the databases on that instance using a user mapping. The only exceptions to this rule are:
a.       GUEST account
b.       Microsoft Windows group memberships
All other logins have the requirement that they map to a database user account. The mapping information is stored in the database along with the login name and the login SID that the database user is mapped to. This information can be found using the sys.server_principals catalog view (which gives you a list of logins created for the current instance) and sys.database_principals (which gives you a list of database users along with the principal ID and SID).
Most organizations use Windows Authentication and after a database failover, users are able to connect seamlessly. Same is the case when the secondary database for log shipping is made the primary after a role change. When SQL Authentication is used to connect to the mirrored database after a failover or a log shipped database after a role change and the login in question has only logon rights on the destination database, then it becomes a different ballgame altogether.
SQL Server maps database users to server logins using SID values. The SIDs for Windows authentication (domain accounts) are the same throughout the environment, hence a problem never arises here when the roles (mirror to principal, or secondary to primary) change. But for SQL logins, the rule of thumb is that the login used to connect to a mirrored/secondary database should exist on both of the instances that host the principal/primary and mirror/secondary databases.
So, what happens when you use SQL Authentication after a failover/role change?
SQL Server relies on SIDs to map a database user to a SQL login. After a failover, the SIDs for a SQL login are bound to mismatch (SQL login SIDs are machine specific, not derived from Active Directory as in the case of domain accounts), so you will start getting "login failed" or "login timeout expired" errors for SQL Authentication. Some of the error messages can be of the following type:
ERROR #1:
Microsoft SQL Native Client error '80040e09'
The EXECUTE permission was denied on the object

ERROR #2:
OLE DB provider "SQLNCLI" for linked server "" returned message
"Login timeout expired".
Msg 18456, Level 14, State 1, Line 1
Login failed for user ''.
Msg 4060, Level 11, State 1, Line 1
Cannot open database "" requested by the login. The login failed.
If you look at the database user properties using Object Explorer in Management Studio, you will find that the user doesn’t correspond to a login.

How do I resolve this issue?
First identify the orphaned database users. This can be done using two methods:
Method 1:
Execute the script below under the context of the database in question.
select [name],SUSER_SNAME(SID) as Resolved_SID
from sys.database_principals
where type_desc = 'SQL_USER'
and SID not in (select SID from sys.server_principals)
and [name] <> 'guest'
This would give you the database user(s) that are orphaned, with a NULL under the Resolved_SID column in the above query output.
Method 2:
Use the sp_change_users_login stored procedure and execute the below query under the context of the database in question.
sp_change_users_login @Action='Report';
This would give you the orphaned database users along with the SID that the database user was mapped to.
How do I work around this issue if I cannot permanently fix it due to lack of downtime?
The Troubleshooting Orphaned Users section in SQL Server Books Online has information on how to deal with orphaned logins and fix them. However, for such situations this would be a temporary fix because the problem would raise its ugly head every time a role change or failover happens. So, if you cannot follow the steps mentioned in this post for fixing the issue permanently, use the steps mentioned in “To Resolve an Orphaned User” under the Troubleshooting Orphaned Users topic in SQL Server Books Online as a temporary solution till you get downtime to implement the permanent fix mentioned below.
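As a sketch of that temporary fix (the database, user, and login names are assumed), the orphaned user can be re-mapped in place:

USE MyDatabase;
GO
-- Re-map the orphaned user to the login of the same name.
EXEC sp_change_users_login @Action = 'Update_One',
    @UserNamePattern = 'testuser', @LoginName = 'testuser';
GO
-- Or, on SQL Server 2005 SP2 and later:
ALTER USER testuser WITH LOGIN = testuser;
GO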
How do I know that this is my actual problem?
Execute the following query on the two instances Principal/Primary and Mirror/Secondary.
select [name],SID from sys.server_principals
where type_desc = 'SQL_LOGIN'
For the logins in question, you will find that the SIDs on both the instances are different.
Now, on to fixing this issue permanently:
1.       You will have to create the sp_help_revlogin stored procedure under master. The script is available under KB918992.
2.       After the stored procedure is created, use it to script out the login(s) in question from the Principal/Primary server instance, e.g., EXEC master.dbo.sp_help_revlogin 'testuser'
3.       Drop the Login(s) on the Secondary/Mirror instance. NOTE: There is a caveat here!! If the login is associated with some specific privileges and database roles, then you would have to explicitly grant/map them. The CREATE LOGIN script obtained using the sp_help_revlogin Stored Procedure doesn’t script out the permissions.
4.       Re-create the Login(s) using the CREATE LOGIN scripts obtained above on the Secondary/Mirror server instance.
5.       Perform a database failover/role change and re-map the SQL Authenticated Login(s) to the database user under User Mappings. This is because if you have a database user and drop the SQL Login associated with it, then the database user again becomes orphaned.
The biggest problem that you would face is mapping the database user to the newly created login. Since the mirrored/secondary database is not in read/write mode, you cannot make changes to the database until you fail over or perform a role change.
The above article applies to SQL Server 2005 and 2008 (all editions where mirroring is offered) for database mirroring, and to SQL Server 2000, 2005, and 2008 (all editions where log shipping is offered) for log shipping.