Changing physical locations of all DBs

I got a request this week from an application engineer. The request was to move ALL physical database files from one physical drive to another physical drive on the same machine.

Hmmmm, I have never moved ALL .mdf files from one place to another at one time. One of my teammates told the application engineer that it may take some time. He wanted to detach/move/attach all the DBs.

I knew this wouldn’t work because:

1) it would take forever

2) how are you going to detach the master DB?

No Detach for YOU!

So my solution contained 3 parts:

  1. Change the location of all the files inside SQL
  2. Shut down and move the files
  3. Change SQL’s startup parameters for the master database

Easy as π.

I used some good old dynamic SQL to create an ALTER DATABASE script for each DB (except the master).

--first, let's just take a look and see where our files are living
SELECT name, physical_name,
CASE
    WHEN type = 0 THEN 'DATA'
    WHEN type = 1 THEN 'LOG'
END AS FileType
FROM sys.master_files
 
--Now let's create those ALTER DATABASE statements
SELECT 'ALTER DATABASE ' + d.name + ' MODIFY FILE ( NAME = ' + f.name + ', FILENAME = ' +
 
--here is where you want to put where you want to move your files
 
' ''c:\NewFolder\' +
RIGHT(f.physical_name, CHARINDEX('\', REVERSE(f.physical_name)) - 1) + ''' ) '
FROM sys.databases d
INNER JOIN sys.master_files f ON d.database_id = f.database_id
WHERE
 --remove this if you want both the data file AND the log file
 f.type = 0
 --and keep this part; database_id 1 is master
 AND d.database_id <> 1
 
--Check again to see if the physical name has changed
SELECT name, physical_name,
CASE
    WHEN type = 0 THEN 'DATA'
    WHEN type = 1 THEN 'LOG'
END AS FileType
FROM sys.master_files
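
Each row that comes back is a ready-to-run statement. For a hypothetical database named SalesDB, the generated output would look something like this:

ALTER DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Data, FILENAME = 'c:\NewFolder\SalesDB.mdf' )

Copy the results into a new query window and run the whole batch.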

After you run this, you should see this message a bunch of times:

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.

Now you can shut down your SQL services and move the physical files to their new location.

Once you have all the files moved over (including the master) you’ll need to change the startup parameters.

To do this:

  1. Open up the Configuration Manager
  2. Right click on your SQL instance –> Properties
  3. Click on Advanced –> Startup Parameters
  4. Change the location of the master .mdf (see the example below)
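The startup parameters are one semicolon-separated string: -d is the path to master's data file, -l is its log file, and -e is the error log. After the move, the string would look something like this (the folder and instance paths here are just my example):

-dC:\NewFolder\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lC:\NewFolder\mastlog.ldf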
Once you have updated your startup parameters, you should be good to go. I want to stress "should" here because you have to make sure that your service account has access to the new location. If it doesn't, SQL won't start.

I tried this a couple of times on my test machine and then in the production environment. I was hoping to "wow" the customer with the speed of completing this task; unfortunately, their disks were really slow and moving the files took the majority of the time.

Find failed SQL jobs with powershell

This weekend I went to SQL Saturday in Auckland. It was nice to interact with other DBAs again. Kent Chenery (blog | twitter) did a nice session on PowerShell for n00bs and it got me thinking that I need to start posting some of my PowerShell stuff up here.

When I started my current job I was given a handover document that had a list of standard operating procedures, some links to some useful info, and a “daily morning checks” list. When I first read this list I was amazed. It went something like this:

  1. Log on to server A
  2. Open up management studio
  3. Open up SQL Agent’s job
  4. Order by last outcome
  5. Rerun the jobs that failed
  6. Repeat for servers B, C, D….and on

Ouch… this was all done manually. It would take way too long just to "check" whether these jobs failed. To fix this issue I turned to my good friend – PowerShell. I came up with a little script to hit up all my servers, check the SQL jobs, and mail me if a job failed in the last 24 hours.
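Under the covers, "did a job fail in the last day?" is just a question for msdb. Here's a sketch of the single-server version in T-SQL (run_status 0 means failed, and step_id 0 is the row with the overall job outcome):

SELECT j.name AS jobname, h.run_date, h.run_time
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobhistory h ON j.job_id = h.job_id
WHERE j.enabled = 1
  AND h.step_id = 0      --overall job outcome row
  AND h.run_status = 0   --0 = failed
  AND h.run_date >= CAST(CONVERT(CHAR(8), DATEADD(DAY, -1, GETDATE()), 112) AS INT)
ORDER BY h.run_date DESC, h.run_time DESC

The script below asks SMO the same question, but for every server on my list, and mails me the answer.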

#Find Failed SQL Jobs with Powershell
#by Adam Mikolaj
#www.sqlsandwiches.com


[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null;


#let's get our list of servers. For this, create a .txt file with all the server names you want to check.
$sqlservers = Get-Content "C:\DevWork\scripts\computers.txt";

#we'll get the full date and toss that in a variable
$datefull = Get-Date
#and shorten it
$today = $datefull.ToShortDateString()

#let's set up the email stuff
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient("ExchangeServerNameHere")
$msg.Body = "Here is a list of failed SQL Jobs for $today (the last 24 hours)"


#here, we will begin with a foreach loop. We'll be checking all servers in the .txt referenced above.
foreach($sqlserver in $sqlservers)
{
    #here we need to set which server we are going to check in this loop
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $sqlserver;

    #now let's loop through all the jobs
    foreach ($job in $srv.JobServer.Jobs)
    {
        #now we are going to set up some variables.
        #These values come from the information in $srv.JobServer.Jobs
        $jobName = $job.Name;
        $jobEnabled = $job.IsEnabled;
        $jobLastRunOutcome = $job.LastRunOutcome;
        $jobLastRun = $job.LastRunDate;

        #we are only concerned about jobs that are enabled and have run before.
        #POSH is weird with nulls so you check by just calling the var
        #(note: -eq, not =, or you'll assign instead of compare)
        if($jobEnabled -eq $true -and $jobLastRun)
        {
            #we need to find out how many days ago that job ran
            $datediff = New-TimeSpan -Start $jobLastRun -End $datefull
            #now we need to take the value of days in $datediff
            $days = $datediff.Days

            #gotta check to make sure the job ran in the last 24 hours
            if($days -le 1)
            {
                #and make sure the job failed
                if($jobLastRunOutcome -eq "Failed")
                {
                    #now we add the job info to our email body. use `n for a new line
                    $msg.Body = $msg.Body + "`n `n FAILED JOB INFO:
                     SERVER = $sqlserver
                     JOB = $jobName
                     LASTRUN = $jobLastRunOutcome
                     LASTRUNDATE = $jobLastRun"
                }
            }
        }
    }
}

#once all that loops through and builds our $msg.Body, we are ready to send

#who is this coming from
$msg.From = "adam@sqlsandwiches.com"
#and going to
$msg.To.Add("adam@sqlsandwiches.com")
#and a nice pretty title
$msg.Subject = "FAILED SQL Jobs for $today"
#and BOOM! send that bastard!
$smtp.Send($msg)

Just set this script up to run every morning as a scheduled task and you can enjoy a nice SQL Sandwich instead of plowing through SQL agent jobs.

Next up, I’ll do the same job but use the psx extensions. Wonder which is easier…

How to make a Big Ugly Nasty Test Table

Do you need to fill up a database with terrible data to test restore times? Do you want to see how fast you can fill up your hard drive?

Well try out this little bit of code on your DB and watch the MBs get sucked up!

--five tables with a CHAR(8000) column; each row eats a full 8KB page
CREATE TABLE BigGuy (column1 CHAR(8000))
CREATE TABLE BigGuy2 (column1 CHAR(8000))
CREATE TABLE BigGuy3 (column1 CHAR(8000))
CREATE TABLE BigGuy4 (column1 CHAR(8000))
CREATE TABLE BigGuy5 (column1 CHAR(8000))
 
DECLARE @counta INT = 1
WHILE @counta <= 1000000000
BEGIN
    INSERT INTO BigGuy VALUES('test')
    INSERT INTO BigGuy2 VALUES('test')
    INSERT INTO BigGuy3 VALUES('test')
    INSERT INTO BigGuy4 VALUES('test')
    INSERT INTO BigGuy5 VALUES('test')
    SET @counta = @counta + 1
END

This should suck up about a GB a minute. Want to suck up more space? Add more tables and make sure your database is set to FULL recovery model.
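If your test database isn't already in FULL, flipping it over is one line (TestDB is a placeholder name; use your victim database):

ALTER DATABASE TestDB SET RECOVERY FULL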

WARNING! – This is NOT A GOOD IDEA and should ONLY be used in controlled environments FOR TESTING PURPOSES

Find…underlings…

Find…..underlings…..
searching……
underlings found……

I was recently given the task of creating a stored procedure to find people who are under other people in our users database.

The requester wanted to put in the user’s ID and have it return everyone under them…and everyone under those people…and so on.

The table structure looked something like this:

CREATE TABLE [dbo].[tblUser](
    [UserID] [INT] NOT NULL,
    [EmployeeNum] [CHAR](7) NULL,
    [FirstName] [VARCHAR](20) NULL,
    [LastName] [VARCHAR](20) NULL,
    [StatusID] [INT] NULL,
    [ManagerEmployeeNum] [VARCHAR](20) NULL,
    CONSTRAINT [PK_tblUser] PRIMARY KEY CLUSTERED ( [UserID] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

To complete this task I created a temp table, some counters, and a little while loop.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:		Adam Mikolaj
-- Create date: 23 June 2011
-- Description:	SP to find what people are under what people (managers)
-- =============================================
CREATE PROCEDURE uSP_FindUnderlings
@UserID INT
 
AS
BEGIN
SET NOCOUNT ON;
 
CREATE TABLE #peopleunder(
employeenum VARCHAR(20))
 
DECLARE
@employeeNum VARCHAR(20),
@countA INT = 1, @countB INT = 0
 
SET @employeeNum = (SELECT RTRIM(EmployeeNum) FROM tblUser WHERE UserID = @UserID)
 
INSERT INTO #peopleunder
SELECT @employeeNum
 
WHILE @countA <> @countB
	BEGIN
		SET @countA = (SELECT COUNT(*) FROM #peopleunder)
 
		INSERT INTO #peopleunder
		SELECT RTRIM(EmployeeNum) AS employeenum FROM tblUser WHERE StatusID = 1
		AND (ManagerEmployeeNum IN (SELECT EmployeeNum FROM #peopleunder)
		AND EmployeeNum NOT IN (SELECT EmployeeNum FROM #peopleunder))
 
		SET @countB = (SELECT COUNT(*) FROM #peopleunder)
 
	END
SELECT * FROM tblUser WHERE EmployeeNum IN (SELECT EmployeeNum FROM #peopleunder)
AND UserID <> @UserID AND StatusID = 1
DROP TABLE #peopleunder
END
GO

 

The basic logic behind this is using the loop to keep inserting records into the temp table until it doesn't find any more records. I use the counters to compare the pre-insert and post-insert row counts. If they match, the loop is finished.
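For what it's worth, on SQL 2005 and up a recursive CTE gets the same answer without the temp table or the loop. Here's a sketch against the table above, assuming the manager chain has no cycles:

DECLARE @UserID INT = 42;  --whichever manager you're starting from (made-up value)
 
WITH Underlings AS (
    --anchor: the person we start from
    SELECT UserID, EmployeeNum, ManagerEmployeeNum
    FROM dbo.tblUser
    WHERE UserID = @UserID
 
    UNION ALL
 
    --recursive part: anyone whose manager is already in the set
    SELECT u.UserID, u.EmployeeNum, u.ManagerEmployeeNum
    FROM dbo.tblUser u
    INNER JOIN Underlings p ON u.ManagerEmployeeNum = RTRIM(p.EmployeeNum)
    WHERE u.StatusID = 1
)
SELECT *
FROM Underlings
WHERE UserID <> @UserID;
--add OPTION (MAXRECURSION 0) if the org chart is deeper than 100 levels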

Just a simple little SP to start my morning.

Don’t touch that Maintenance Plan Job

Good ol' maintenance plans. With 2005 and beyond we received the glorious SSIS-backed maintenance plans. I love SSIS so naturally I love using these plans.

Last week I was working on setting up a new server. We had to make sure we had full, differential, and transaction log backups. No big deal. I just created one maintenance plan with 3 subplans.

I went in and created all the steps that were required. Backup DBs, clean up old ones, and send mails based on pass/failure.

The manager of this project then wanted all the backups copied over to a different machine. I didn’t need to do log shipping or mirroring or anything. The files just needed to be the same on both machines.

To do this I went into the SQL Agent and opened up the job for each of my steps. I created an extra PowerShell step that used Robocopy to copy all of the backups after they were successfully created locally. Saved my jobs and everything was great.
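The copy itself is a one-liner along these lines – the paths here are made up, and /MIR mirrors the source to the destination (deletions included), so point it somewhere you mean it:

robocopy "D:\SQLBackups" "\\OtherServer\SQLBackups" *.bak /MIR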


About half a day later, the operator I set up to receive the pass/fail emails didn't like getting the transaction log backup emails every 30 minutes and wanted me to remove it from that subplan.

Meh – not a problemo. I went in and disabled that part of the maintenance plan. I saved my work and went on my merry way. A couple hours later, I checked to make sure everything was working correctly. I noticed that my robo copy job wasn’t copying all the new backups. What the hell….

I went back into my jobs and noticed all of my powershell jobs were gone. Poof!

After some testing I found out that when you save a maintenance plan, it regenerates the underlying SQL Agent jobs and overwrites whatever was there before. Gotcha!

So TIP ‘o The Day: Do ALL your work for the maintenance plan INSIDE the maintenance plan.

My work around for this was to create another job just for the robo copy. I then added this job to each of the maintenance plans using the ‘Execute SQL Server Agent Job Task’.

Recovery Model for Every Database

One thing that is enjoyable about my position is working with people who have no idea what SQL is besides "a difficult version of Excel". This gives me the chance to use what I have learned over the last couple of years and explain it to the people who are using it in easily digestible….sandwiches, if you will.

One thing I see a lot of is FULL recovery model with no log backups. Sometimes these databases get tossed into production environments by third party tools that have full admin rights. Since I haven't been here that long, I am still working on locking down all the production servers. So I needed a quick way to find which DBs were in SIMPLE and which DBs were in FULL.

So I ran into a very very cool function recently. DATABASEPROPERTYEX. If you ask this dude the status of a database, he’ll give it to you. Want to know the version? Yup, he can do that too. For the full list, check out – BOL.
I used this guy to tell me the recovery model of every DB on that server.

Check it out:

SELECT [name] AS dbname,
       DATABASEPROPERTYEX([name], 'Recovery') AS recoverymodel
FROM master.dbo.sysdatabases
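
If you're on 2005 or later, sys.databases will hand you the same answer without the function call:

SELECT name, recovery_model_desc
FROM sys.databases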

Team that up with registered servers and you can find out the recovery model of every DB on all your servers!

Where the hell is that column???

“Hey Database dude, I need you to update the PDADE33e433 table with values for columns GrAnDaMa12333 and WTF32323 with values ‘ASdjendnd’ and ‘BarfFartwekljweljk3′. Cool? Thanks.”

I work with engineers who use databases. They design all the products that my company sells. They are really good at designing but…they use databases. For them, I am a necessary evil. We all get along very well but they would rather be designing products than deal with storing information. Did I mention it’s one of me to about 300 of them?

With those odds it's very hard to keep track of what they are doing in their databases. Supporting over 100 databases (possibly more; I'm only on month 3), I'm still finding it hard to figure out how they are building these databases (that's a whole different blog entry).

What I do know is that I get requests to change data that they don’t have access to. When this happens I have to figure out where this data is. Most of the time they tell me a server and a table. I always say “Sure, no problem, just make sure you send me an email with the details”

This is where my journey begins. To save time I have built a stored proc that I have been installing in all the master DBs of all the instances. This SP checks all columns in all databases to see if one is LIKE the keyword that I pass in. Feel free to use it. NOTE: it also gives you view names.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
/*
-- =============================================
-- Author:       Adam Mikolaj
-- Create date:  19 May 2011
-- Description:  Use this SP to search for a column in every database on your server.
-- I created this because sometimes I have no idea what database a column I need is in.
-- This quickly searches all your DBs.
-- Sample - exec uSP_WheresThatColumn 'Birthdate'
-- =============================================
*/
 
CREATE PROCEDURE uSP_WheresThatColumn
 
@COLUMN VARCHAR(255)
AS
BEGIN
SET NOCOUNT ON;
 
CREATE TABLE #whereisit(
database_name VARCHAR(255),
table_name VARCHAR(255),
column_name VARCHAR(255))
 
DECLARE @SQL VARCHAR(MAX)
SET @SQL =
 'insert into #whereisit
 exec sp_MSforeachdb  ''
use [?] select ''''?'''' as DATABASE_NAME, TABLE_NAME, COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
 where COLUMN_NAME like ''''%' + @COLUMN + '%'''' and TABLE_NAME not like ''''#%''''
 ''
  '
 
EXEC (@SQL)
SELECT * FROM #whereisit
DROP TABLE #whereisit
END
GO

To call it just type in master..uSP_WheresThatColumn ‘column name’

Enjoy!

Adding domain users with TRY CATCH

I was recently given a task to give 20 people read permissions on a dev server. My organization uses AD groups to manage these on the production side of things but on the dev side, it’s a bit like the wild west.

I wasn't sure if these users were already in the database or if they were new. I figured this was a good time to play around and create some sort of auto inserter for me. I wanted to be able to put the person's username in and have it tell me whether they were in the system or not. If they were, great, add them to this DB; if not, add them to the server and then the DB.

I ran into the situation where there was a name on my list of people whose username was wrong. Well… how can I check AD to make sure they were there without it blowing up? I couldn't find a solution, so I decided to use TRY…CATCH and grab the error.

Before anyone says anything about automatically entering users, this was just for my personal use and is not intended to be placed in any application.

Check it out:

USE MASTER
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:       Adam Mikolaj
-- Create date:  12 May 2011
-- Description:  Little stored procedure for entering in users
-- =============================================
CREATE PROCEDURE uSP_AutoInserter
 
@USER VARCHAR(100),
@dbname VARCHAR(100),
@DOMAIN VARCHAR(100)
 
AS
BEGIN
 
DECLARE @LiveUser BIT, @Sqls VARCHAR(500)
 
--is the login already on the server?
IF EXISTS (SELECT * FROM syslogins
           WHERE name LIKE '%' + @USER + '%')
BEGIN
    SET @LiveUser = 1
END
ELSE
    SET @LiveUser = 0
 
IF @LiveUser = 0
BEGIN
    SET @Sqls = 'CREATE LOGIN [' + @DOMAIN + '\' + @USER + ']
    FROM WINDOWS WITH DEFAULT_DATABASE=' + @dbname + ''
 
    --if the account doesn't exist in AD, CREATE LOGIN blows up.
    --TRY...CATCH grabs the error instead of killing the batch.
    BEGIN TRY
        EXEC (@Sqls)
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE() AS errormessage;
    END CATCH
 
    PRINT 'USER is now in the system'
END
ELSE
    PRINT 'USER already in the system'
END
 
GO

To call this procedure just do this:

uSP_AutoInserter ‘Dude’, ‘DB’, ‘domain’
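Note that the proc only handles the login. The other half of the original request – mapping the login into the database and handing out read permissions – would look something like this (the names are made up, and this isn't part of the proc above):

USE DevDB  --whatever you passed in as @dbname
GO
CREATE USER [domain\Dude] FOR LOGIN [domain\Dude]
--read-only, per the original request
EXEC sp_addrolemember 'db_datareader', 'domain\Dude'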

Backing up all DBs using sp_msforeachdb

This will be the first in a series of posts on a couple of my new favorite undocumented commands. I'm talking about sp_msforeachdb and sp_msforeachtable. These guys are awesome.

Recently I needed to back up all the databases on a SQL 2000 server. I could have created a maintenance plan and just used it as a one-off but I figured I'd get a little creative. I used sp_msforeachdb to back each DB up.

Check it out:

EXEC sp_msforeachdb
' IF DB_ID(''?'') > 4   --database_id 1-4 are the system DBs, so this only grabs user DBs
BEGIN
    BACKUP DATABASE [?] TO DISK = N''\\server\Backups\?.BAK'' WITH NOFORMAT,
    INIT, NAME = N''?-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD
 
    --grab the position of the backup we just took so we can verify it
    DECLARE @backupSetId AS INT
    SELECT @backupSetId = position FROM msdb..backupset WHERE database_name = N''?''
    AND backup_set_id = (SELECT MAX(backup_set_id)
    FROM msdb..backupset WHERE database_name = N''?'')
 
    IF @backupSetId IS NULL
    BEGIN
        RAISERROR(N''Verify failed. Backup information for database ''''?'''' not found.'', 16, 1)
    END
 
    RESTORE VERIFYONLY FROM DISK = N''\\server\Backups\?.BAK''
    WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
END '

Managing your SQL Server logs

“I am getting an error. Something’s broke. My application works fine. It’s the database.”

We’ve heard that guy more than once. So what do you do? I usually open up my box of SQLs and check out the logs.

This is usually a pretty simple process. Open the management folder, open the SQL Server Logs folder, double click your log. It’s a nice place to start.

Now what happens when you have 500,000 records in there? It takes FOREVER!

Luckily SQL 2011 (Denali) has a feature in there to stop the whole log from loading. If you aren’t on SQL 2011 (Denali) then you have to sit there and wait…and wait…and wait…

I recently found this beauty – EXEC master.sys.sp_cycle_errorlog

This boots your current log to second place and creates a new one. Now you have a nice clean fresh log to fill up.

So what happens to the old logs? Well, the SQL default is to keep 6. Almost every time you open up that folder, you see the current log + 6 old ones. After 6, they are recycled.

You are able to configure the number of logs you keep too, between 6 and 99. Just right click on 'SQL Server Logs' –> Configure.

SQL creates a new log every time the server is restarted. For most people, that is way too far apart. On my servers, I set up a weekly job that runs this command. That way I know I have 1 week's worth of data in each of my logs.
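
For reference, here's a sketch of that weekly job built with the msdb job procs. The job name and the Sunday 2 AM schedule are just my choices – adjust to taste:

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'Cycle SQL Error Log'
EXEC dbo.sp_add_jobstep @job_name = N'Cycle SQL Error Log',
    @step_name = N'Cycle it',
    @subsystem = N'TSQL',
    @command = N'EXEC master.sys.sp_cycle_errorlog;'
EXEC dbo.sp_add_jobschedule @job_name = N'Cycle SQL Error Log',
    @name = N'Weekly - Sunday 2AM',
    @freq_type = 8,                --weekly
    @freq_interval = 1,            --Sunday
    @freq_recurrence_factor = 1,   --every 1 week
    @active_start_time = 20000     --02:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Cycle SQL Error Log'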