Sunday, August 21, 2011

Moving data from the mdf file to the ndf file(s)

 SCENARIO:
1) You have an MDF file from a database that has grown to a size that has almost exhausted the disk space.
2) You now want to keep the MDF from growing further.
3) You also want to transfer the data from the MDF file to the NDF file(s) and shrink the MDF file size.

METHOD 1: (Permanent Solution, may take a long time to complete)
1. This activity comes with a requirement that no users should connect to the database during the maintenance activity.
2. Identify the table(s) that are taking up all the space and see if they have a clustered index (using SQL Server Management Studio)
3. If yes, move the tables in the following manner:
a. Create a new file group and a file (in the new file group) on a different disk with a large initial size, at least twice the size of the data you are moving onto that file.
b. Run the below command
sp_help 'TableName' --Identify the clustered index name
-- Note: IndexName, TableName, ColumnName and NewFileGroup below are placeholders.
-- For SQL Server 2000.
CREATE CLUSTERED INDEX [IndexName]
ON [TableName] ([ColumnName])
WITH DROP_EXISTING
ON [NewFileGroup]
-- For SQL Server 2005 and onwards.
CREATE CLUSTERED INDEX [IndexName]
ON [TableName] ([ColumnName])
WITH (DROP_EXISTING = ON)
ON [NewFileGroup]
4. Shrink the database MDF file to release the free space to the operating system.
DBCC SHRINKFILE (1, NOTRUNCATE) --This moves allocated pages from the end of the data file to unallocated pages in the front of the file.
DBCC SHRINKFILE (1, TRUNCATEONLY) --Releases all free space at the end of the file to the operating system.
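For step 2 above, a query along these lines can list tables by allocated space. This is a hedged sketch using the SQL Server 2005+ catalog views; the join on container_id is the common simplified form and may slightly undercount LOB allocations:

```sql
-- Hedged sketch (SQL Server 2005+): list tables by total allocated space
-- so you can find the one(s) worth moving to the new filegroup.
SELECT  OBJECT_NAME(p.object_id)       AS table_name,
        SUM(a.total_pages) * 8 / 1024  AS total_mb
FROM    sys.partitions p
JOIN    sys.allocation_units a
        ON a.container_id = p.partition_id
GROUP BY p.object_id
ORDER BY total_mb DESC;
```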
METHOD 2:  (Temporary workaround to keep the MDF from growing further)
• Create about 2 more files in the same filegroup but on different disks with a large initial file size. (If already present, manually increase their file size to a fairly large size.)
• The bigger the size of the other NDF files, the more free space is available; the more free space available in the NDFs, the less the MDF file will grow. (This is in accordance with the proportional fill algorithm.)
• The word "large" used above is relative to the environment, so we suggest you do capacity planning for the same. (Failing which, increase the NDF file sizes to at least about twice the size of the MDF.)
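The first bullet above can be sketched as T-SQL. The database name, file names, paths, and sizes below are placeholders; size the files according to your own capacity planning:

```sql
-- Sketch of Method 2: add two NDF files on different disks to the
-- PRIMARY filegroup so proportional fill spreads new allocations
-- away from the MDF. All names, paths, and sizes are placeholders.
ALTER DATABASE [MyDB]
ADD FILE ( NAME = MyDB_Data2,
           FILENAME = 'E:\SQLData\MyDB_Data2.ndf',
           SIZE = 20GB, FILEGROWTH = 1GB );

ALTER DATABASE [MyDB]
ADD FILE ( NAME = MyDB_Data3,
           FILENAME = 'F:\SQLData\MyDB_Data3.ndf',
           SIZE = 20GB, FILEGROWTH = 1GB );
```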
Filegroups use a proportional fill strategy across all the files within each filegroup. As data is written to the filegroup, the SQL Server Database Engine writes an amount proportional to the free space in the file to each file within the filegroup, instead of writing all the data to the first file until full. It then writes to the next file. For example, if file f1 has 100 MB free and file f2 has 200 MB free, one extent is allocated from file f1, two extents from file f2, and so on. In this way, both files become full at about the same time, and simple striping is achieved.
As soon as all the files in a filegroup are full, the Database Engine automatically expands one file at a time in a round-robin manner to allow for more data, provided that the database is set to grow automatically. For example, a filegroup is made up of three files, all set to automatically grow. When space in all the files in the filegroup is exhausted, only the first file is expanded. When the first file is full and no more data can be written to the filegroup, the second file is expanded. When the second file is full and no more data can be written to the filegroup, the third file is expanded. If the third file becomes full and no more data can be written to the filegroup, the first file is expanded again, and so on.
Please refer to the following link for more details:

Orphaned Users with Database Mirroring and Log Shipping

Let’s start with a little background on how SQL Server logins and Database users actually work. A login (Windows/SQL  authenticated) gives you the permissions to log onto a SQL Server instance. With the help of the login, an individual can access the databases on that instance using a user mapping. The only exceptions to this rule are:
a.       GUEST account
b.       Microsoft Windows group memberships
All other logins have the requirement that they have a mapping to a database user account. The mapping information is stored in the database along with the login name and the login SID that the database user is mapped to. This information can be found using the sys.server_principals catalog view (which lists the logins created on the current instance) and sys.database_principals (which lists the database users along with the principal ID and SID).
Most organizations use Windows Authentication and after a database failover, users are able to connect seamlessly. Same is the case when the secondary database for log shipping is made the primary after a role change. When SQL Authentication is used to connect to the mirrored database after a failover or a log shipped database after a role change and the login in question has only logon rights on the destination database, then it becomes a different ballgame altogether.
SQL Server maps database users and server logins using SID values. The SID for Windows Authentication logins (domain accounts) is the same throughout the environment, hence a problem never arises here when the roles (Mirror to Principal or Secondary to Primary) change. But for SQL logins, the rule of thumb tells us that the login used to connect to a mirrored/secondary database should exist on both instances which host the principal/primary and mirror/secondary databases.
So, what happens when you use SQL Authentication after a failover/role change?
SQL Server relies on SIDs to map a database user to a SQL login. After a failover, the SIDs for SQL Authentication logins are bound to mismatch (SQL Authentication SIDs are machine specific and not derived from AD as in the case of domain accounts), so you will start getting "login failed" or "login timeout period expired" errors for SQL Authentication. Some of the error messages can be of the following type:
ERROR #1:
Microsoft SQL Native Client error '80040e09'
The EXECUTE permission was denied on the object

ERROR #2:
OLE DB provider "SQLNCLI" for linked server "" returned message
"Login timeout expired".
Msg 18456, Level 14, State 1, Line 1
Login failed for user ''.
Msg 4060, Level 11, State 1, Line 1
Cannot open database "" requested by the login. The login failed.
If you look at the database user properties using Object Explorer in Management Studio, you will find that the user doesn’t correspond to a login.

How do I resolve this issue?
First identify the orphaned database users. This can be done using two methods:
Method 1:
Execute the script below under the context of the database in question.
select [name],SUSER_SNAME(SID) as Resolved_SID
from sys.database_principals
where type_desc = 'SQL_USER'
and SID not in (select SID from sys.server_principals)
and [name] <> 'guest'
This would give you the database user(s) that is/are orphaned and a NULL under the Resolved_SID column value in the above query output.
Method 2:
Use the sp_change_users_login stored procedure and execute the below query under the context of the database in question.
sp_change_users_login @Action='Report';
This would give you the orphaned database users along with the SID that the database user was mapped to.
How do I Workaround this issue if I cannot permanently fix this due to lack of downtime?
The Troubleshooting Orphaned Users section in SQL Server Books Online has information on how to deal with orphaned users and fix them. However, for such situations this is only a temporary fix, because the problem would raise its ugly head every time a role change or failover happens. So, if you cannot follow the steps mentioned in this post for fixing the issue permanently, use the steps mentioned in "To Resolve an Orphaned User" under the Troubleshooting Orphaned Users topic in SQL Server Books Online as a temporary solution till you get downtime to implement the permanent fix mentioned below.
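The temporary re-mapping from Books Online can be sketched as follows; the database and user names are placeholders for this illustration:

```sql
-- Temporary fix (per BOL's "To Resolve an Orphaned User"): re-map the
-- orphaned database user to the existing login of the same name.
-- 'MyDatabase' and 'testuser' are placeholder names.
USE [MyDatabase];
GO
EXEC sp_change_users_login @Action          = 'Update_One',
                           @UserNamePattern = 'testuser',
                           @LoginName       = 'testuser';
```

Remember that this re-mapping has to be repeated after every failover or role change, which is why it is only a stopgap.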
How do I know that this is my actual problem?
Execute the following query on the two instances Principal/Primary and Mirror/Secondary.
select [name],SID from sys.server_principals
where type_desc = 'SQL_LOGIN'
For the logins in question, you will find that the SIDs on both the instances are different.
Now, on to how I fix this issue permanently :)
1. You will have to create the sp_help_revlogin stored procedure under master. The script is available under KB918992.
2. After the stored procedure is created, use it to script out the login(s) in question from the Principal/Primary server instance, e.g. EXEC master.dbo.sp_help_revlogin 'testuser'
3. Drop the login(s) on the Secondary/Mirror instance. NOTE: There is a caveat here! If the login is associated with specific privileges and database roles, then you will have to explicitly grant/map them; the CREATE LOGIN script obtained using the sp_help_revlogin stored procedure doesn't script out the permissions.
4. Re-create the login(s) using the CREATE LOGIN scripts obtained above on the Secondary/Mirror server instance.
5. Perform a database failover/role change and re-map the SQL Authenticated login(s) to the database user under User Mappings. This is because if you have a database user and drop the SQL login associated with it, the database user again becomes orphaned.
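The essence of what sp_help_revlogin generates is a CREATE LOGIN statement that carries the original SID. A hedged sketch, with the login name, password, and SID below being placeholder values (the real values come from the sp_help_revlogin output):

```sql
-- On the Principal/Primary: capture the login's SID.
SELECT SUSER_SID('testuser');

-- On the Mirror/Secondary: recreate the login with that same SID so the
-- database user maps cleanly after a failover. The password and SID
-- shown here are placeholders only.
CREATE LOGIN [testuser]
WITH PASSWORD = 'StrongP@ssw0rd!',
     SID = 0x91F3A2B4C5D6E7F8A9B0C1D2E3F4A5B6;
```

Because the SIDs now match on both instances, the database user resolves to the login without any re-mapping after subsequent failovers.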
The biggest problem that you would face is mapping the database user to the newly created login. Since the mirrored/secondary database is not in read/write mode, you cannot make changes to the database till you fail over or perform a role change.
The above article applies for SQL Server 2005 & 2008 (all editions where Mirroring is offered) for Database Mirroring and SQL Server 2000, 2005 & 2008 (all editions where Log Shipping is offered) for Log Shipping.

Steps to Enable AWE on SQL Server 2005

Here are the steps to enable AWE so that SQL Server 2005 can use additional memory if your physical memory is more than 4 GB on a 32-bit Windows 2003 box.
1. Add the /PAE switch at the end of the last line inside c:\boot.ini.
2. Run gpedit.msc. In the left-hand pane, expand Computer Configuration, expand Windows Settings, expand Security Settings, expand Local Policies, and select User Rights Assignment.
3. In the right-hand pane, find Lock pages in memory and double-click it, then add your SQL Server startup account on the Local Security Policy Setting tab.
4. In SQL Server Management Studio, run:
sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'awe enabled', 1
RECONFIGURE
GO
Ignore the error message below, if you have it.
Msg 5845, Level 16, State 1, Line 1
Address Windowing Extensions (AWE) requires the 'lock pages in memory' privilege which is not currently present in the access token of the process.
5. Reboot.
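Alongside enabling AWE, it is commonly recommended to cap SQL Server's memory so the operating system keeps some for itself. A hedged sketch; the 6144 MB value is a placeholder that should come from your own sizing:

```sql
-- Cap SQL Server's memory after enabling AWE so the OS retains headroom.
-- The 6144 MB value below is a placeholder - size it for your server.
sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'max server memory', 6144
RECONFIGURE
GO
```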

Thursday, August 18, 2011

Windows domains and domain users


A domain is a group of computers that run Windows operating systems. Among them is a computer that holds all the information related to user authentication and the user database; it is called the domain controller (server). Every user who is part of this user database on the domain controller is called a domain user. Domain users have access to any resource across the domain and its subdomains with the privileges they have, unlike a standalone user who has access only to the resources available on a specific system.
With the release of Windows 2000 Server, Microsoft released Active Directory (AD), which is now widely used in Windows networks to store, authenticate, and control users who are part of the domain. A Windows domain uses various methods to authenticate users (encrypted passwords and handshake methods such as PKI, Kerberos, EAP, SSL certificates, NAP, LDAP, and IPsec policy), which makes authentication robust. One can choose the authentication method that suits the business needs and the environment.
Let's now see various authentication methods in detail.
Public Key Infrastructure (PKI): This is the most common method used to transmit data over insecure channels such as the Internet using digital certificates. It generally has two parts: the public and private keys. These keys are generated by a Certificate Authority, such as Thawte. Public keys are stored in a directory where they are accessible by all parties. The public key is used by the message sender to send encrypted messages, which can then be decrypted using the private key.
Kerberos: This is an authentication method used in client-server architectures to authorize the client to use service(s) on a server in a network. In this method, when a client sends a request to use a service on a server, the request goes to the authentication server, which generates a session key and a random value based on the username. This session key and random value are then passed to the server, which grants or rejects the request. These sessions are valid for a certain time period, which means that for that amount of time the client can use the service without having to re-authenticate itself.
Extensible Authentication Protocol (EAP): This is an authentication protocol generally used in wireless and point-to-point connections.


SSL Certificates: A Secure Socket Layer (SSL) certificate is a digital certificate that is used to identify a website or server that provides a service to clients and sends the data in an encrypted form. SSL certificates are typically used by websites such as Gmail. When we type a URL and press Enter, the web browser sends a request to the web server to identify itself. The web server then sends a copy of its SSL certificate, which is checked by the browser. If the browser trusts the certificate (this is generally done on the basis of the CA, Registration Authority, and directory verification), it will send a message back to the server, and in reply the web server sends an acknowledgement to the browser to start an encrypted session.
Network Access Protection (NAP): This is a new platform introduced by Microsoft with the release of Windows Server 2008. It provides access to the client based on the identity of the client, the group it belongs to, and the level of compliance it has with the policy defined by the network administrators. If the client doesn't have the required compliance level, NAP has mechanisms to bring the client to the compliance level dynamically and allow it to access the network.
Lightweight Directory Access Protocol (LDAP): This is a protocol that runs directly over TCP/IP. An LDAP directory is a set of objects: organizational units, printers, groups, and so on. When the client sends a request for a service, the server queries the LDAP directory for the requested information, and based on that information and the client's level of access, it provides access to the client.
IP Security (IPsec): IPsec is a set of protocols that provides security at the network layer. IPsec provides two choices:
Authentication Header: encapsulates the authentication of the sender in a header of the network packet.
Encapsulating Security Payload: supports encryption of both the header and the data.
Now we know the basic information on Windows domains, domain users, and the various authentication methods used with Windows servers.

Attaching and Detaching databases in SQL Server


There are a lot of options available to the database developer/DBA to move databases from one instance to another. Backup and restore is one of them, copying them over using DTS or SSIS is another one, and having your DDL and DML copy scripts is yet another way to do it. One of the popular ways to move databases to a different instance on the same server or to a different server is using the sp_detach_db and sp_attach_db system stored procedures. In this blog post, we will go through the steps involved in detaching and attaching databases.
To start with, we will create a test database. Log on to Query Analyzer or Management Studio with proper authorization and issue the following command to create the database. Just make sure that you have the proper privileges (CREATE DATABASE or CREATE ANY DATABASE) assigned, or that the login you are using is a member of the sysadmin or dbcreator fixed server roles. Let's keep it simple and create a sample TEST database:
USE MASTER
GO
CREATE DATABASE TEST
GO
Now run the sp_helpfile system stored procedure to check how many files were created and the location of the files.
USE TEST
GO
SP_HELPFILE
Here is the abbreviated output.
TEST        C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\TEST.mdf
TEST_log    C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\TEST_log.LDF
The output indicates that the data file and log file were created in the default location. Once we created the database, we realized that there will not be enough space on the c:\ drive, so we need to move the database files to the d:\ drive. We will achieve this using sp_detach_db and sp_attach_db.
sp_detach_db takes two or three arguments depending upon the SQL Server version. In SQL Server 2000 it takes only two arguments, and in 2005 it takes three arguments.
/* SQL Server 2000 */
sp_detach_db @dbname = 'dbname', @skipchecks = 'skipchecks'
/* SQL Server 2005 */
sp_detach_db @dbname = 'database_name', @skipchecks = 'skipchecks',
@keepfulltextindexfile = 'KeepFulltextIndexFile'
•    @dbname is the name of the database we want to detach.
•    @skipchecks indicates whether we want to skip UPDATE STATISTICS during the detach process. It is an optional parameter with a default value of NULL. If it is set to 'true', it will skip UPDATE STATISTICS; if it is set to 'false', it will run UPDATE STATISTICS.
•    @keepfulltextindexfile is also an optional parameter, with a default value of 'true'. If it is set to 'false', it will drop the full-text index files and the metadata associated with them. It will keep the metadata if the option is NULL or set to 'true'.
The major restriction on detaching a database is that the database should not be in use. You cannot detach a database which is being used actively. In order to detach the database, you need to kill all the connections and set the database to single user mode so that further connections are prohibited. For the complete list of restrictions, please refer to BOL. Following is an example of detaching the database in SQL Server 2005.
USE MASTER
GO
EXEC sp_detach_db 'TEST', 'true', 'true'
Now let us move the files to a different location on the d: drive (D:\SQLServer\Data) and then use the sp_attach_db procedure to attach the database back. Only users who are members of the sysadmin or dbcreator fixed server roles can execute this procedure.
sp_attach_db @dbname = 'dbname', @filename1 = 'filename_n' [,...16]
sp_attach_db can take up to 17 arguments: the database name and 16 file names. If there are more than 16 data files, one has to use the CREATE DATABASE command with the FOR ATTACH clause to attach the database (please note that in future versions the CREATE DATABASE ... FOR ATTACH command will be the preferred way to go, as BOL states that sp_attach_db will be deprecated). Execute the following procedure to attach the database back.
EXEC sp_attach_db @dbname = N'TEST',
@filename1 = N'd:\SQLServer\Data\TEST.mdf',
@filename2 = N'd:\SQLServer\Data\TEST_log.LDF'
GO
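The same attach can also be expressed with CREATE DATABASE ... FOR ATTACH, the form BOL recommends going forward. A sketch using the same files as above:

```sql
-- Equivalent attach using the non-deprecated syntax. Works for any
-- number of files, not just 16.
CREATE DATABASE TEST
ON (FILENAME = 'd:\SQLServer\Data\TEST.mdf'),
   (FILENAME = 'd:\SQLServer\Data\TEST_log.LDF')
FOR ATTACH;
GO
```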
Once the database is attached and usable, you can remove the files from the previous location (if you copied rather than moved them) to avoid confusion. sp_helpfile will indicate the new location of the data files. You can also run the DBCC CHECKDB command to verify the integrity of the attached database.
If the database is replicated, then one should take a little extra care during the detach and attach process to make sure that you don't run into any restriction scenarios. For replication considerations, please refer to BOL. This is a much easier and faster way to move a database to a different location compared to backup and restore.
Couple of things to keep in mind:
1)    If you are moving databases from one instance to another, you have to take care of the orphaned users in the database, i.e. you will need to run a script to align the logins with the users in the database.
2)    You should update the statistics with a FULL SCAN once the attach is done.
3)    Beware of different collation settings between the source and the destination instance.
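The first two checklist items can be sketched as follows; the user and table names are placeholders for this illustration:

```sql
-- Hedged sketch of the post-attach checklist. 'testuser' and
-- dbo.MyTable are placeholder names.
USE TEST;
GO
-- Item 1: find and re-map orphaned users.
EXEC sp_change_users_login @Action = 'Report';
EXEC sp_change_users_login 'Auto_Fix', 'testuser';
-- Item 2: refresh statistics with a full scan (repeat per table,
-- or use sp_updatestats for the whole database without FULLSCAN).
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```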

Monday, August 15, 2011

Difference between truncating and shrinking the transaction log file


Truncating the transaction log:


Truncating a log file changes the status of one or more VLFs (virtual log files) from active to inactive. By default, SQL Server changes VLFs from active to inactive and reuses them automatically (just like a circular buffer).


Shrinking the transaction log:

Shrinking the log file will reduce the physical log file size. Shrinking the log file is not a best practice unless you have no other option.


Truncating the transaction log:
 
When does it truncate and reuse it automatically?

It depends upon the recovery model.

  •  When a database is in the simple recovery model, SQL Server runs a CHECKPOINT about every minute, truncates/marks the inactive portion of the VLFs, and reuses them automatically.
  •  Under the FULL or BULK_LOGGED recovery models, SQL Server truncates/marks the inactive portion of the VLFs and reuses them every time you perform a log backup.
You can truncate the log file (VLFs) manually by using the TRUNCATE_ONLY or NO_LOG options.

Here is the test:

Create DATABASE test


ALTER DATABASE test SET RECOVERY full


BACKUP DATABASE test TO DISK ='e:\backup\test.trn'
BACKUP LOG test WITH truncate_only

Now run the log backup.

BACKUP LOG test TO DISK ='e:\backup\test.trn'

Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

Check the error log by using the below command.

SP_READERRORLOG 0,1,'BACKUP LOG WITH TRUNCATE_ONLY'

BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated.
The simple recovery model should be used to automatically truncate the transaction log.

Note: When you use this command you can't do regular log backups or point-in-time recovery. This command truncates the log (VLFs) and breaks the log chain. To restart the log chain you have to run a full or differential backup.

Microsoft has removed this command from SQL Server 2008 onwards.

What to do in SQL server 2000 and 2005?

You can use trace flag 3031 to stop users from performing the deprecated operation.

DBCC TRACEON (3031,1)

Let's check,

BACKUP LOG test WITH TRUNCATE_ONLY

Ha... SQL Server throws the following error:


Msg 8309, Level 16, State 1, Line 1
BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated. The simple recovery model should be used to automatically truncate the transaction log.

Shrinking the transaction log:

      

OK, how do we avoid shrinking?

  •  Run log backups frequently and keep the log size as small as possible.
  •  Alter the database and point the log file to another drive / add a new drive.
  •  If your company is ready to accept some data loss (say 2 to 3 hours), then put the database in the simple recovery model and run a differential backup every 2 to 3 hours.
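The second bullet above can be sketched as T-SQL; the file name, path, and sizes are placeholders:

```sql
-- Sketch: add a second log file on another drive so the database can
-- keep logging while you arrange space. Name, path, and sizes below
-- are placeholders.
ALTER DATABASE test
ADD LOG FILE ( NAME = test_log2,
               FILENAME = 'f:\SQLLogs\test_log2.ldf',
               SIZE = 4GB, FILEGROWTH = 512MB );
```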
You can manually shrink the log file by using the below command.

Important note: Shrinking the data/log file will increase the fragmentation. Try to truncate the log file instead of shrinking.
SELECT name FROM SYS.MASTER_FILES WHERE DB_NAME(database_id)='test'
Test_log is the logical name of the test database's log file.

DBCC SHRINKFILE(Test_Log)

Sometimes the shrink command will not shrink the file: when the active portion of the log is at the end of the file, or when a transaction is still open (not yet committed).

See the shrink result:

DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
5     2       305760       70           305760     64


Alright, the shrinking doesn't reduce the file size.


Let's check for open transactions by using DBCC OPENTRAN. You can also check the log size and status using DBCC LOGINFO and sys.master_files.


DBCC OPENTRAN()

Transaction information for database 'test'.

Oldest active transaction:
    SPID (server process ID): 54
    UID (user ID) : -1
    Name          : INSERT
    LSN           : (197:11774:2)
    Start time    : Mar  7 2011  4:18:44:240PM
    SID           : 0x01

DBCC execution completed. If DBCC printed error messages, contact your system administrator.


sys.master_files will return the size of each file.
SELECT size/128 AS size_in_mb, type_desc FROM SYS.MASTER_FILES WHERE DB_NAME(database_id)='test'

DBCC LOGINFO will return the status of each VLF in the log file.

If the status is 2, the VLF is in use (active).
If the status is 0, the VLF is not in use (inactive).





Look in BOL for more about DBCC LOGINFO.
Then, how do we shrink the log file?


OK... wait until the open transaction finishes.


Check the log_reuse_wait_desc column in sys.databases by running the below command and see what value it returns. It will tell you what is preventing log reuse and help you keep the database from running out of space.



SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases WHERE name='test'

Look in BOL for more about log_reuse_wait_desc.
 
ALTER DATABASE [Test] SET RECOVERY SIMPLE WITH rollback immediate

DBCC SHRINKFILE(Test_Log)

ALTER DATABASE [Test] SET RECOVERY FULL WITH rollback immediate
Switching the recovery model and using TRUNCATE_ONLY both work the same way.

Note: When you use this method you can't do regular log backups or point-in-time recovery either. It truncates the log (VLFs) and breaks the log chain. To restart the log chain you have to run a full or differential backup.


Conclusion
Shrinking the log file reduces the physical log file size, while truncating the log file changes one or more VLFs from active to inactive so they can be reused.