Changing physical locations of all DBs

I got a request this week from an application engineer. The request was to move ALL physical database files from one physical drive to another physical drive on the same machine.

Hmmmm, I have never moved ALL .mdf files from one place to another at one time. One of my teammates told the application engineer that it might take some time. He wanted to detach/move/attach all the DBs.

I knew this wouldn’t work because:

1) it would take forever

2) how are you going to detach the master DB?

No Detach for YOU!

So my solution contained 3 parts:

  1. Change the location of all the files inside SQL
  2. Shut down SQL and move the files
  3. Change SQL’s startup parameters for the master database

Easy as π.

I used some good old dynamic SQL to create an ALTER DATABASE script for each db (except the master):

--first, let's just take a look and see where our files are living
SELECT name, physical_name,
    CASE
        WHEN type = 0 THEN 'DATA'
        WHEN type = 1 THEN 'LOG'
    END AS FileType
FROM sys.master_files
 
--Now let's create those ALTER DATABASE statements
SELECT 'ALTER DATABASE [' + d.name + '] MODIFY FILE ( NAME = ' + f.name + ', FILENAME = ' +
 
--here is where you put the new location for your files
 
' ''c:\NewFolder\' +
RIGHT(f.physical_name, CHARINDEX('\', REVERSE(f.physical_name)) - 1) + ''' ) '
FROM sys.databases d
INNER JOIN sys.master_files f ON d.database_id = f.database_id
WHERE
    --remove this if you want both the data AND log files
    f.type = 0
    --and keep this part; database_id 1 is master
    AND d.database_id <> 1
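
Each row that comes back is a ready-to-run statement, something like this (the database and file names here are made up):

ALTER DATABASE [SandwichDB] MODIFY FILE ( NAME = SandwichDB_Data, FILENAME = 'c:\NewFolder\SandwichDB.mdf' )

Copy the generated statements into a new query window and run them.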
 
--Check again to see that the physical names have changed
SELECT name, physical_name,
    CASE
        WHEN type = 0 THEN 'DATA'
        WHEN type = 1 THEN 'LOG'
    END AS FileType
FROM sys.master_files

After you run the generated ALTER DATABASE statements, you should see this message a bunch of times:

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.

Now you can shut down your SQL services and move the physical files to their new location.

Once you have all the files moved over (including the master) you’ll need to change the startup parameters.

To do this:

  1. Open up SQL Server Configuration Manager
  2. Right click on your SQL instance –> Properties
  3. Click on Advanced –> Startup Parameters
  4. Change the location of the master .mdf (and .ldf) – see the sketch below
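
The startup parameters are one semicolon-separated string: -d is the path to master’s data file, -l is master’s log file, and -e is the errorlog. After the move, the string should look something like this (the paths here are hypothetical; -e can stay wherever it already points):

-dc:\NewFolder\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lc:\NewFolder\mastlog.ldf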
Once you have updated your startup parameters, you should be good to go. I want to stress “should” here because you have to make sure that your service account has access to the new location. If it doesn’t, SQL won’t start.
I tried this a couple of times on my test machine and then in the production environment. I was hoping to “wow” the customer with the speed of completing this task; unfortunately, their disks were really slow and moving the files took the majority of the time.

Find failed SQL jobs with powershell

This weekend I went to SQL Saturday in Auckland. It was nice to interact with other DBAs again.  Kent Chenery (blog | twitter) did a nice session on powershell for n00bs and it got me thinking that I need to start posting some of my powershell stuff up here.

When I started my current job I was given a handover document that had a list of standard operating procedures, some links to some useful info, and a “daily morning checks” list. When I first read this list I was amazed. It went something like this:

  1. Log on to server A
  2. Open up management studio
  3. Open up SQL Agent’s job
  4. Order by last outcome
  5. Rerun the jobs that failed
  6. Repeat for servers B, C, D….and on

Ouch… this was all done manually and would take way too long just to “check” whether these jobs failed. To fix this I turned to my good friend – powershell. I came up with a little script to hit up all my servers, check the SQL jobs, and mail me if a job failed in the last 24 hours.

#Find Failed SQL Jobs with Powershell
#by Adam Mikolaj
#www.sqlsandwiches.com
 
 
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null;
 
 
#let's get our list of servers. For this, create a .txt file with all the server names you want to check.
$sqlservers = Get-Content "C:\DevWork\scripts\computers.txt";
 
#we'll get the long date and toss that in a variable
$datefull = Get-Date
#and shorten it
$today = $datefull.ToShortDateString()
 
#let's set up the email stuff
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient("ExchangeServerNameHere")
$msg.Body = "Here is a list of failed SQL Jobs for $today (the last 24 hours)"
 
 
#here, we will begin with a foreach loop. We'll be checking all servers in the .txt referenced above.
foreach($sqlserver in $sqlservers)
{
 
    #here we need to set which server we are going to check in this loop
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $sqlserver;
 
        #now let's loop through all the jobs
        foreach ($job in $srv.Jobserver.Jobs)
        {
            #now we are going to set up some variables. 
            #These values come from the information in $srv.Jobserver.Jobs
            $jobName = $job.Name;
            $jobEnabled = $job.IsEnabled;
            $jobLastRunOutcome = $job.LastRunOutcome;
            $jobLastRun = $job.LastRunDate;
 
 
            #we are only concerned about jobs that are enabled and have run before. 
            #POSH is weird with nulls so you check by just calling the var
            #if we wanted to check isnull() we would use !$jobLastRun  
            if($jobEnabled -eq $true -and $jobLastRun)
                {  
                   # we need to find out how many days ago that job ran
                   $datediff = New-TimeSpan -Start $jobLastRun -End $datefull
                   #now we need to take the value of days in $datediff
                   $days = $datediff.Days
 
 
                       #gotta check to make sure the job ran in the last 24 hours     
                       if($days -le 1 )                    
                         {       
                            #and make sure the job failed
                            IF($jobLastRunOutcome -eq "Failed")
                            {
                            #now we add the job info to our email body. use `n for a new line
                            $msg.Body = $msg.Body + "`n `n FAILED JOB INFO: 
                             SERVER = $sqlserver 
                             JOB = $jobName 
                             LASTRUN = $jobLastRunOutcome
                             LASTRUNDATE = $jobLastRun"
 
                            }    
                          } 
                }
 
 
        }
}
 
#once all that loops through and builds our $msg.Body, we are ready to send
 
#who is this coming from
$msg.From = "adam@sqlsandwiches.com"
#and going to
$msg.To.Add("adam@sqlsandwiches.com")
#and a nice pretty title
$msg.Subject = "FAILED SQL Jobs for $today"
#and BOOM! send that bastard!
$smtp.Send($msg)

Just set this script up to run every morning as a scheduled task and you can enjoy a nice SQL Sandwich instead of plowing through SQL agent jobs.
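
If you schedule it with Windows Task Scheduler, something like this works (assuming you saved the script as FailedJobs.ps1 – the path and run time are just examples):

schtasks /create /tn "Check Failed SQL Jobs" /sc daily /st 06:30 /tr "powershell.exe -ExecutionPolicy Bypass -File C:\DevWork\scripts\FailedJobs.ps1"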

Next up, I’ll do the same job but use the psx extensions. Wonder which is easier…

Find…underlings…

Find…..underlings…..
searching……
underlings found……

I was recently given the task of creating a stored procedure to find people who are under other people in our users database.

The requester wanted to put in the user’s ID and have it return everyone under them…and everyone under those people…and so on.

The table structure looked something like this:

CREATE TABLE [dbo].[tblUser](
	[UserID] [INT] NOT NULL,
	[EmployeeNum] [CHAR](7) NULL,
	[FirstName] [VARCHAR](20) NULL,
	[LastName] [VARCHAR](20) NULL,
	[StatusID] [INT] NULL,
	[ManagerEmployeeNum] [VARCHAR](20) NULL,
 CONSTRAINT [PK_tblUser] PRIMARY KEY CLUSTERED
(
	[UserID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

To complete this task I created a temp table, some counters, and a little while loop.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:		Adam Mikolaj
-- Create date: 23 June 2011
-- Description:	SP to find which people are under which people (managers)
-- =============================================
CREATE PROCEDURE uSP_FindUnderlings
@UserID INT
 
AS
BEGIN
SET NOCOUNT ON;
 
CREATE TABLE #peopleunder(
employeenum VARCHAR(20))
 
DECLARE
@employeeNum VARCHAR(20),
@countA INT = 1, @countB INT = 0
 
SET @employeeNum = (SELECT RTRIM(EmployeeNum) FROM tblUser WHERE UserID = @UserID)
 
INSERT INTO #peopleunder
SELECT @employeeNum
 
WHILE @countA <> @countB
	BEGIN
		SET @countA = (SELECT COUNT(*) FROM #peopleunder)
 
		INSERT INTO #peopleunder
		SELECT RTRIM(EmployeeNum) AS employeenum FROM tblUser WHERE StatusID = 1
		AND (ManagerEmployeeNum IN (SELECT employeenum FROM #peopleunder)
		AND EmployeeNum NOT IN (SELECT employeenum FROM #peopleunder))
 
		SET @countB = (SELECT COUNT(*) FROM #peopleunder)
 
	END
SELECT * FROM tblUser WHERE EmployeeNum IN (SELECT employeenum FROM #peopleunder)
AND UserID <> @UserID AND StatusID = 1
DROP TABLE #peopleunder
END
GO
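
To try it out, pass in a user’s ID (the ID here is made up):

EXEC uSP_FindUnderlings @UserID = 42

That returns every active (StatusID = 1) user anywhere under user 42’s chain of command.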

 

The basic logic behind this: the loop keeps inserting records into the temp table until it can’t find any more. The counters capture the pre-insert and post-insert row counts; when they match, nothing new was added and the loop is finished.

Just a simple little SP to start my morning.

Don’t touch that Maintenance Plan Job

Good ol’ maintenance plans. With 2005 and beyond we received the glorious SSIS-backed maintenance plans. I love SSIS, so naturally I love using these plans.

Last week I was working on setting up a new server. We had to make sure we had full, differential, and transactional log backups. No big deal. I just created one maintenance plan with 3 subplans.

I went in and created all the steps that were required: back up DBs, clean up old backups, and send emails based on pass/failure.

The manager of this project then wanted all the backups copied over to a different machine. I didn’t need to do log shipping or mirroring or anything. The files just needed to be the same on both machines.

To do this I went into the SQL Agent and opened up the job for each of my subplans. I added an extra powershell step that used robocopy to copy all of the backups to the other machine once they were successfully created locally (see the sketch below). Saved my jobs and everything was great.
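
The extra step boiled down to a single robocopy call. A minimal sketch, assuming hypothetical local and remote backup folders:

robocopy "D:\SQLBackups" "\\OtherServer\SQLBackups" *.bak *.trn /S /XO /R:2 /W:5

/S copies subfolders, /XO skips files the destination already has a current copy of, and /R and /W keep the retries short so a network hiccup doesn’t hang the job.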


About half a day later, the operator I set up to receive the pass/fail emails didn’t like getting the transactional log backup emails every 30 minutes and wanted me to remove it from that subplan.

Meh – not a problemo. I went in and disabled that part of the maintenance plan. I saved my work and went on my merry way. A couple hours later, I checked to make sure everything was working correctly. I noticed that my robo copy job wasn’t copying all the new backups. What the hell….

I went back into my jobs and noticed all of my powershell jobs were gone. Poof!

After some testing I found out that when you save a maintenance plan, it regenerates the underlying SQL Agent jobs and overwrites whatever was there before. Gotcha!

So TIP ‘o The Day: Do ALL your work for the maintenance plan INSIDE the maintenance plan.

My work around for this was to create another job just for the robo copy. I then added this job to each of the maintenance plans using the ‘Execute SQL Server Agent Job Task’.

Where the hell is that column???

“Hey Database dude, I need you to update the PDADE33e433 table with values for columns GrAnDaMa12333 and WTF32323 with values ‘ASdjendnd’ and ‘BarfFartwekljweljk3’. Cool? Thanks.”

I work with engineers who use databases. They design all the products that my company sells. They are really good at designing but…they use databases. For them, I am a necessary evil. We all get along very well but they would rather be designing products than deal with storing information. Did I mention it’s one of me to about 300 of them?

With those odds it’s very hard to keep track of what they are doing in their databases. Supporting over 100 databases (possibly more; I’m only on month 3), I’m still finding it hard to work out how they are building these databases (that’s a whole different blog entry).

What I do know is that I get requests to change data that they don’t have access to. When this happens I have to figure out where this data is. Most of the time they tell me a server and a table. I always say “Sure, no problem, just make sure you send me an email with the details”

This is where my journey begins. To save time I have built a stored proc that I have been installing in the master DB of every instance. This SP checks all columns in all databases to see if one is LIKE the keyword that I pass in. Feel free to use it. NOTE: since it reads INFORMATION_SCHEMA.COLUMNS, it also gives you view names.

SET ansi_nulls ON
GO
SET quoted_identifier ON
GO
/*
-- =============================================
-- Author:       Adam Mikolaj
-- Create date:  19 May 2011
-- Description:  Use this SP to search for a column in every database on your server.
-- I created this because sometimes I have no idea which database a column I need is in. 
-- This quickly searches all your DBs
-- Sample - exec uSP_WheresThatColumn 'Birthdate'
-- =============================================
*/
 
CREATE PROCEDURE uSP_WheresThatColumn
 
@COLUMN VARCHAR(255)
AS
BEGIN
SET nocount ON;
 
CREATE TABLE #whereisit(
database_name VARCHAR(255),
TABLE_NAME VARCHAR(255),
column_name VARCHAR(255))
 
DECLARE @SQL VARCHAR(MAX)
SET @SQL =
 'insert into #whereisit
 exec sp_MSforeachdb  ''
use [?] select ''''?'''' as DATABASE_NAME , TABLE_NAME, COLUMN_NAME 
from INFORMATION_SCHEMA.COLUMNS
 where COLUMN_NAME like ''''%'+@COLUMN+'%'''' and TABLE_NAME not like ''''#%''''
 ''
  '
 
EXEC (@SQL)
SELECT * FROM #whereisit
DROP TABLE #whereisit
END
GO

To call it just type in master..uSP_WheresThatColumn ‘column name’
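
For example, a search for ‘Birthdate’ might come back with something like this (the results here are made up):

EXEC master..uSP_WheresThatColumn 'Birthdate'
 
-- database_name  TABLE_NAME    column_name
-- HRDev          tblEmployee   BirthDate
-- HRDev          vwEmployee    BirthDate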

Enjoy!

Fixing Orphaned Users

One of my favorite commands when I started my career was sp_change_users_login. I had to move a lot of databases around, and I orphaned a lot of users. This command really helped me find which users were orphaned and reconnect them with the appropriate login.

When should you use this command?

This command is very useful when you have a user in your database and a login on your server that should be connected (or mapped) to each other but are not.

To find out if you have any orphaned users, run:

EXEC sp_change_users_login 'Report'

This will give you a list of orphaned users in your DB.

If you find you have some orphaned users you can run this command to fix them:

EXEC sp_change_users_login
'Auto_Fix',
'YourUser'

It’s a handy tool to use, but BOL warns against it because it will be removed in a future version of SQL Server. They suggest using ALTER USER instead.

ALTER USER YourUser WITH LOGIN = YourUser

Query to find Duplicate records in a column

We are going to start SQL Sandwiches with a tasty little snack. I have been asked this in an interview before: “What is an easy way to find duplicate records in one table?”

The answer is quite easy yet it can stump anyone who hasn’t done it before.

It’s a simple select statement that you can use for multiple columns.

SELECT column1,
COUNT(column1)
FROM table1
GROUP BY column1
HAVING COUNT(column1) > 1

You can add other columns in there if you are looking for records where several columns together are duplicated – see the sketch below.
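
A quick sketch with hypothetical column names, grouping on the combination of columns:

SELECT column1, column2,
COUNT(*)
FROM table1
GROUP BY column1, column2
HAVING COUNT(*) > 1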