SQL Saturday #250

I’m currently helping to plan Pittsburgh’s next SQL Saturday. It should be a good day with lots of good sessions by some new (see Steve Hood‘s post here) and some returning speakers.

I’m personally looking forward to the session on Statistical Semantic Search. It’s something I haven’t used before.

Here is the current brochure. If you haven’t signed up yet, what are you waiting for?!

Welcome to SQLSaturday #250 in Pittsburgh


SQLSaturday is a free, full-day training event for SQL Server professionals and those wanting to learn more about SQL Server and network with the SQL Server community. Our event will be held Saturday, September 14, 2013 at ITT Technical Institute, 5460 Campbells Run Road, Pittsburgh, PA 15205. Admittance is free, but we charge a $10.00 fee to help defray the cost of lunch, coffee/tea, and water, which will be provided to attendees throughout the day.

Pick from more than 30 different sessions and learn from the experts!! Check the schedule to take a peek at the sessions and the speakers. Some of our sponsors are interested in hiring, so don’t forget to stop by the sponsors’ tables during the event.
Please register soon as seating is limited, and let friends and colleagues know about the event.

New this year!! An entire track for “Wanna Be DBAs”. Don’t know anything about SQL Server? This track is for you!! Explore the life of a DBA and see if it can be yours too. It is aimed mainly at students and novices. Once you complete your registration at http://www.sqlsaturday.com/250/eventhome.aspx, also register via Eventbrite at https://wannabeadba.eventbrite.com/ (only if you want to attend the “Wanna Be a DBA” track).

Check “Track 0” under the schedule at http://sqlsaturday.com/250/schedule.aspx for session details.

Don’t forget to follow us on Twitter @SqlSaturdayPitt and join the conversation with the #SqlSat250 hashtag

PreConference Training (Optional and prepaid) – Friday Sept 13, 2013

Introduction to Performance Tuning in SQL Server 2012 presented by John Sterrett and Konstantin Melamud
Have you always wanted to dig into performance tuning but weren’t sure where to start? During this all-day pre-con you will find out which SQL statements are causing your performance problems and learn basic concepts to improve your general performance. You will also get all the free tools and scripts the pros use to tune their systems.

Eventbrite - Introduction to Performance Tuning with SQL Server 2012

Changing physical locations of all DBs

I got a request this week from an application engineer. The request was to move ALL physical database files from one physical drive to another physical drive on the same machine.

Hmmmm, I have never moved ALL .mdf files from one place to another at one time. One of my teammates told the application engineer that it may take some time. He wanted to detach/move/attach all the DBs.

I knew this wouldn’t work because:

1) it would take forever

2) how are you going to detach the master DB?

No Detach for YOU!

So my solution contained 3 parts:

  1. Change the location of all the files inside SQL
  2. shutdown and move the files
  3. Change SQL’s startup parameters for the master database

Easy as π.

I used some good old dynamic SQL to create an ALTER DATABASE script for each db (except the master)

--first, let's just take a look and see where our files are living
SELECT name, physical_name,
CASE WHEN type = 0 THEN 'DATA'
     WHEN type = 1 THEN 'LOG'
END AS FileType
FROM sys.master_files

--Now let's create those ALTER DB statements
SELECT 'ALTER DATABASE ' + d.name + ' MODIFY FILE ( NAME = ' + f.name + ', FILENAME = ' +
--here is where you want to put where you want to move your files
' ''c:\NewFolder\'+
right(f.physical_name, charindex('\',reverse(f.physical_name))-1 ) + ''' ) '
FROM sys.databases d
inner join sys.master_files f on d.database_id = f.database_id
--remove this if you want both DB AND log file
WHERE f.type = 0
--and keep this part, dbid 1 is master
and d.database_id <> 1

--After running the generated ALTER statements, check again to see if the physical name is changed
SELECT name, physical_name,
CASE WHEN type = 0 THEN 'DATA'
     WHEN type = 1 THEN 'LOG'
END AS FileType
FROM sys.master_files

After you run this, you should see this message a bunch of times:

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.

Now you can shut down your SQL services and move the physical files to their new location.

Once you have all the files moved over (including the master) you’ll need to change the startup parameters.

To do this:

  1. open up the configuration manager
  2. Right click on your SQL instance –> properties
  3. Click on advanced –> startup parameters
  4. Change the location of the master .mdf
Once you have updated your startup parameters, you should be good to go. I want to stress “should” here because you have to make sure that your service account has access to the new location. If it doesn’t, SQL won’t start.
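For reference, the startup parameters are a single semicolon-separated string; after the move, the -d entry (master data file) and the -l entry (master log file) should point at the new folder. The paths below are just examples, not your actual ones:

```text
-dC:\NewFolder\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL\Log\ERRORLOG;-lC:\NewFolder\mastlog.ldf
```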
I tried this a couple times on my test machine and then in the production environment. I was hoping to “wow” the customer with the speed of completing this task; unfortunately, their disks were really slow and moving the files took the majority of the time.

Find failed SQL jobs with powershell

This weekend I went to SQL Saturday in Auckland. It was nice to interact with other DBAs again.  Kent Chenery (blog | twitter) did a nice session on powershell for n00bs and it got me thinking that I need to start posting some of my powershell stuff up here.

When I started my current job I was given a handover document that had a list of standard operating procedures, some links to some useful info, and a “daily morning checks” list. When I first read this list I was amazed. It went something like this:

  1. Log on to server A
  2. Open up management studio
  3. Open up SQL Agent’s job
  4. Order by last outcome
  5. Rerun the jobs that failed
  6. Repeat for servers B, C, D….and on

Ouch… this was all done manually. This would take way too long just to “check” whether these jobs failed. To fix this issue I turned to my good friend – powershell. I came up with a little script to hit up all my servers, check the SQL jobs, and mail me if a job failed in the last 24 hours.

#Find Failed SQL Jobs with Powershell
#by Adam Mikolaj
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null;

#let's get our list of servers. For this, create a .txt file with all the server names you want to check.
$sqlservers = Get-Content "C:\DevWork\scripts\computers.txt";

#we'll get the long date and toss that in a variable
$datefull = Get-Date
#and shorten it
$today = $datefull.ToShortDateString()

#let's set up the email stuff
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient("ExchangeServerNameHere")
$msg.Body = "Here is a list of failed SQL Jobs for $today (the last 24 hours)"

#here, we will begin with a foreach loop. We'll be checking all servers in the .txt referenced above.
foreach($sqlserver in $sqlservers)
{
    #here we need to set which server we are going to check in this loop
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $sqlserver;
    #now let's loop through all the jobs
    foreach ($job in $srv.Jobserver.Jobs)
    {
        #now we are going to set up some variables.
        #These values come from the information in $srv.Jobserver.Jobs
        $jobName = $job.Name;
        $jobEnabled = $job.IsEnabled;
        $jobLastRunOutcome = $job.LastRunOutcome;
        $jobLastRun = $job.LastRunDate;

        #we are only concerned about jobs that are enabled and have run before.
        #POSH is weird with nulls so you check by just calling the var
        #if we wanted to check isnull() we would use !$jobLastRun
        if($jobEnabled -eq $true -and $jobLastRun)
        {
            #we need to find out how many days ago that job ran
            $datediff = New-TimeSpan -Start $jobLastRun -End $datefull
            #now we need to take the value of days in $datediff
            $days = $datediff.Days
            #gotta check to make sure the job ran in the last 24 hours
            if($days -le 1)
            {
                #and make sure the job failed
                if($jobLastRunOutcome -eq "Failed")
                {
                    #now we add the job info to our email body. use `n for a new line
                    $msg.Body = $msg.Body + "`n `n FAILED JOB INFO:
                     SERVER = $sqlserver
                     JOB = $jobName
                     LASTRUN = $jobLastRunOutcome
                     LASTRUNDATE = $jobLastRun"
                }
            }
        }
    }
}

#once all that loops through and builds our $msg.Body, we are ready to send
#who is this coming from
$msg.From = "adam@sqlsandwiches.com"
#and going to (change this to your own address)
$msg.To.Add("adam@sqlsandwiches.com")
#and a nice pretty title
$msg.Subject = "FAILED SQL Jobs for $today"
#and BOOM! send that bastard!
$smtp.Send($msg)

Just set this script up to run every morning as a scheduled task and you can enjoy a nice SQL Sandwich instead of plowing through SQL agent jobs.

Next up, I’ll do the same job but use the psx extensions. Wonder which is easier…

How to make a Big Ugly Nasty Test Table

Do you need to fill up a database with terrible data to test restore times? Do you want to see how fast you can fill up your hard drive?


Well, try out this little bit of code on your DB and watch the MBs get sucked up!

CREATE TABLE BigGuy (column1 CHAR(8000))
CREATE TABLE BigGuy2 (column1 CHAR(8000))
CREATE TABLE BigGuy3 (column1 CHAR(8000))
CREATE TABLE BigGuy4 (column1 CHAR(8000))
CREATE TABLE BigGuy5 (column1 CHAR(8000))

DECLARE @counta INT = 1
WHILE @counta <= 1000000000
BEGIN
	INSERT INTO BigGuy VALUES('test')
	INSERT INTO BigGuy2 VALUES('test')
	INSERT INTO BigGuy3 VALUES('test')
	INSERT INTO BigGuy4 VALUES('test')
	INSERT INTO BigGuy5 VALUES('test')
	SET @counta = @counta + 1
END

This should suck up about a GB a minute. Want to suck up more space? Add more tables and make sure your database is set to FULL recovery model.
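If you want to watch the damage in real time, the standard sp_spaceused system proc will show you how fast the space is going:

```sql
-- size of one of the test tables (run while the loop is going)
EXEC sp_spaceused 'BigGuy3';
-- or check the whole database at once
EXEC sp_spaceused;
```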

WARNING! – This is NOT A GOOD IDEA and should ONLY be used in controlled environments FOR TESTING PURPOSES

So much Internet…


When it comes to the SQLs, I know only a fragment of what I could. To fill in those gaps, I read. Oh, do I read. I’ve only read a few SQL books but I have read hundreds of articles and blogs.

Most of the time it goes like this:

“Hmmmm, that looks like an interesting article. Let me give it a nice ‘ol read”

“Oh…what’s this link? To another blog? Sure! Why not?!”

time to reeeeaadddd

Then I must pull myself away and jump back into the databases.

I like blogs that stick to material I am looking for while also keeping my attention. At the same time, I try to avoid blogs that get too personal or complain too much. I keep my personal blog personal and my technical blog focused on the SQLs. I like to read blogs that try to do the same.

Here is my list of the Top 10 blogs I prefer (in no order):

  1. Grumpy Old DBA – the name says it all
  2. DBA Rant – good stories here
  3. SqlBalls
  4. SQL Rockstar
  5. Shaun J Stuart – my recent favorite. Not a ton of updates but great posts
  6. Pinal Dave – Used to be my favorite but there is such a thing as too much content – it’s hard to keep up
  7. Happy SysAdmin – Great for learning some Admin skillz
  9. Paul White – Wowzer. This guy knows a lot.
  9. sqlchicken
  10. SQL Princess – She loves some CTEs

I like bloggers who I can relate to. Many times they have the same problems I have and/or present alternative solutions.

The last thing I would like to say to all these blogs is – thanks! You’ve helped me a lot.

Do you read blogs? If so, which ones?



underlings found…

I was recently given the task of creating a stored procedure to find which people are under other people in our users database.

The requester wanted to put in the user’s ID and have it return everyone under them…and everyone under those people…and so on.

The table structure looked something like this:


To complete this task I created a temp table, some counters, and a little while loop.

-- =============================================
-- Author:		Adam Mikolaj
-- Create date: 23 June 2011
-- Description:	SP to find what people are under what people (managers)
-- (the proc header was lost in the original post; the name and parameter below are reconstructed)
-- =============================================
CREATE PROCEDURE uSP_PeopleUnder
	@UserID INT
AS
BEGIN
	CREATE TABLE #peopleunder(
		employeenum VARCHAR(20))

	DECLARE @employeeNum VARCHAR(20),
		@countA INT = 1, @countB INT = 0

	SET @employeeNum = (SELECT rtrim(EmployeeNum) FROM tblUser WHERE UID = @UserID)

	INSERT INTO #peopleunder
	SELECT @employeeNum

	WHILE @countA <> @countB
	BEGIN
		SET @countA = (SELECT COUNT(*) FROM #peopleunder)
		INSERT INTO #peopleunder
		SELECT rtrim(EmployeeNum) AS employeenum FROM tblUser WHERE StatusID = 1
		AND (ManagerEmployeeNum IN (SELECT EmployeeNum FROM #peopleunder)
		AND EmployeeNum NOT IN (SELECT EmployeeNum FROM #peopleunder))
		SET @countB = (SELECT COUNT(*) FROM #peopleunder)
	END

	SELECT * FROM tblUser WHERE EmployeeNum IN (SELECT EmployeeNum FROM #peopleunder)
	AND UID <> @UserID AND StatusID = 1

	DROP TABLE #peopleunder
END


The basic logic behind this is using the loop to keep inserting records into the temp table until it doesn’t find any more. I use the counters to compare the pre-insert and post-insert row counts. If they match, the loop is finished.
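The same traversal can also be written without the loop as a recursive CTE. This is just a sketch against the same assumed tblUser columns, and unlike the NOT IN check in the loop version it assumes the manager hierarchy has no cycles:

```sql
-- recursive CTE version of the same lookup
DECLARE @UserID INT = 42;  -- example input

WITH underlings AS (
    -- anchor: the person we start from
    SELECT EmployeeNum FROM tblUser WHERE UID = @UserID
    UNION ALL
    -- recursive part: everyone whose manager is already in the set
    SELECT u.EmployeeNum
    FROM tblUser u
    INNER JOIN underlings x ON u.ManagerEmployeeNum = x.EmployeeNum
    WHERE u.StatusID = 1
)
SELECT * FROM tblUser
WHERE EmployeeNum IN (SELECT EmployeeNum FROM underlings)
AND UID <> @UserID AND StatusID = 1;
```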

Just a simple little SP to start my morning.

Don’t touch that Maintenance Plan Job

Good ol’ maintenance plans. With 2005 and beyond we received the glorious SSIS-backed maintenance plans. I love SSIS, so naturally I love using these plans.

Last week I was working on setting up a new server. We had to make sure we had full, differential, and transactional log backups. No big deal. I just created one maintenance plan with 3 subplans.

I went in and created all the steps that were required: back up DBs, clean up old ones, and send emails based on pass/failure.

The manager of this project then wanted all the backups copied over to a different machine. I didn’t need to do log shipping or mirroring or anything. The files just needed to be the same on both machines.

To do this I went into the SQL Agent and opened up the job for each of my steps. I created an extra powershell step that used robocopy to copy all of the backups after they were successfully created locally. Saved my jobs and everything was great.

yay jobs!

About half a day later, the operator I set up to receive the pass/fail emails didn’t like getting the transactional log backup emails every 30 minutes and wanted me to remove the notification from that subplan.

Meh – not a problemo. I went in and disabled that part of the maintenance plan. I saved my work and went on my merry way. A couple hours later, I checked to make sure everything was working correctly. I noticed that my robocopy job wasn’t copying all the new backups. What the hell…

I went back into my jobs and noticed all of my powershell steps were gone. Poof!

After some testing I found out that when you save a maintenance plan it overwrites whatever was there before. Gotcha!

So TIP ‘o The Day: Do ALL your work for the maintenance plan INSIDE the maintenance plan.

My workaround for this was to create another job just for the robocopy. I then added this job to each of the maintenance plans using the ‘Execute SQL Server Agent Job Task’.

Where the hell is that column???

“Hey Database dude, I need you to update the PDADE33e433 table with values for columns GrAnDaMa12333 and WTF32323 with values ‘ASdjendnd’ and ‘BarfFartwekljweljk3’. Cool? Thanks.”

I work with engineers who use databases. They design all the products that my company sells. They are really good at designing but…they use databases. For them, I am a necessary evil. We all get along very well but they would rather be designing products than deal with storing information. Did I mention it’s one of me to about 300 of them?

With those odds it’s very hard to keep track of what they are doing in their databases. Supporting over 100 databases (possibly more; I’m only on month 3), I’m still finding it hard to figure out how they are building these databases (that’s a whole different blog entry).

What I do know is that I get requests to change data that they don’t have access to. When this happens I have to figure out where this data is. Most of the time they tell me a server and a table. I always say “Sure, no problem, just make sure you send me an email with the details”

This is where my journey begins. To save time I have built a stored proc that I have been installing in all the master DBs of all the instances. This SP checks all columns in all databases to see if one is LIKE the keyword that I pass in. Feel free to use it. NOTE: it also gives you view names.

SET ansi_nulls ON
GO
SET quoted_identifier ON
GO
-- =============================================
-- Author:    Adam Mikolaj
-- Create date: 19 May 2011
-- Description:  Use this SP to search for a column in every database on your server.
-- I created this because sometimes I have no idea what database a column I need is in.
-- This quickly searches all your DBs
-- Sample - exec uSP_WheresThatColumn 'Birthdate'
-- =============================================
CREATE PROCEDURE uSP_WheresThatColumn
	@COLUMN VARCHAR(255)
AS
BEGIN
	SET nocount ON;

	CREATE TABLE #whereisit(
		database_name VARCHAR(255),
		table_name VARCHAR(255),
		column_name VARCHAR(255))

	--searching INFORMATION_SCHEMA.COLUMNS in every DB via sp_MSforeachdb
	DECLARE @sqls VARCHAR(1000)
	SET @sqls = 'insert into #whereisit
	exec sp_MSforeachdb ''
	use [?] select ''''?'''' as DATABASE_NAME, TABLE_NAME, COLUMN_NAME
	from INFORMATION_SCHEMA.COLUMNS
	where COLUMN_NAME like ''''%' + @COLUMN + '%'''' and TABLE_NAME not like ''''#%'''' '''
	EXEC (@sqls)

	SELECT * FROM #whereisit
	DROP TABLE #whereisit
END

To call it just type in master..uSP_WheresThatColumn ‘column name’


Adding domain users with TRY CATCH

I was recently given a task to give 20 people read permissions on a dev server. My organization uses AD groups to manage these on the production side of things but on the dev side, it’s a bit like the wild west.

I wasn’t sure if these users were already in the database or if they were new. I figured this was a good time to play around and create some sort of auto inserter for me. I wanted to be able to put the person’s user name in and wanted it to return if they were in the system or not. If they were, great, add them to this DB, if not, add them to the server and then DB.

I ran into the situation where one name on my list had the wrong username. Well… how can I check AD to make sure they were there without it blowing up? I couldn’t find a solution so I decided to use TRY…CATCH and grab the error.

Before anyone says anything about automatically entering users, this was just for my personal use and is not intended to be placed in any application.

Check it out:

SET ansi_nulls ON
SET quoted_identifier ON
-- =============================================
-- Author:    Adam Mikolaj
-- Create date: 12 May 2011
-- Description:  Little stored procedure for entering in users
-- (parts of this proc were lost in the original post; the proc name, the EXISTS
-- check, and the FROM WINDOWS clause below are reconstructed)
-- =============================================
CREATE PROCEDURE uSP_AutoInserter
	@user VARCHAR(100), @dbname VARCHAR(100), @domain VARCHAR(100)
AS
BEGIN
	DECLARE @LiveUser BIT, @Sqls VARCHAR(500)

	IF EXISTS (SELECT * FROM sys.server_principals
		WHERE name LIKE '%' + @user + '%')
		SET @LiveUser = 1
	ELSE
		SET @LiveUser = 0

	IF @LiveUser = 0
	BEGIN
		SET @Sqls = 'CREATE LOGIN [' + @domain + '\' + @user + '] FROM WINDOWS'
		BEGIN TRY
			EXEC (@Sqls)
			PRINT 'USER IS now IN the system'
		END TRY
		BEGIN CATCH
			--grab the error instead of blowing up when the account isn't in AD
			SELECT Error_message() AS errormessage;
		END CATCH
	END
	ELSE
		PRINT 'USER already IN the system'
END

To call this procedure just do this:

uSP_AutoInserter ‘Dude’, ‘DB’, ‘domain’

Managing your SQL Server logs

“I am getting an error. Something’s broke. My application works fine. It’s the database.”

We’ve heard that guy more than once. So what do you do? I usually open up my box of SQLs and check out the logs.

This is usually a pretty simple process. Open the management folder, open the SQL Server Logs folder, double click your log. It’s a nice place to start.

Now what happens when you have 500,000 records in there? It takes FOREVER!

Luckily SQL Server 2012 (code-named Denali) has a feature to stop the whole log from loading. If you aren’t on 2012 yet, you have to sit there and wait…and wait…and wait…
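If you’re stuck waiting on an older version, one workaround is to skip the log viewer and query the log with a filter instead, using xp_readerrorlog (undocumented but widely used; the parameters below are log file number, log type, and a search string):

```sql
-- search the current SQL error log (file 0, type 1 = SQL, 2 = Agent)
-- for entries containing 'failed'
EXEC master.dbo.xp_readerrorlog 0, 1, N'failed';
```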

I recently found this beauty – EXEC master.sys.sp_cycle_errorlog

This boots your current log to second place and creates a new one. Now you have a nice clean fresh log to fill up.

So what happens to the old logs? Well, the SQL default is to keep 6. Almost every time you open up that folder, you see Current + 6 old ones. After 6, they are recycled.

You are able to configure the number of logs you keep too, between 6 and 99. Just right click on ‘SQL Server Logs’ –> Configure.
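If you’d rather script that setting than click through the dialog, the retained-log count lives in the registry; this (also undocumented) call writes the same value the GUI does, here bumping retention to 12 logs:

```sql
-- set the number of error logs to keep to 12
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 12;
```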

SQL creates a new log every time the server is restarted. For most people, that is way too far apart. On my servers, I set up a weekly job that runs this command. That way I know I have one week’s worth of data in each of my logs.
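If you want to script that weekly job rather than click through the Agent GUI, the standard msdb procedures will do it; the job and schedule names here are just examples:

```sql
USE msdb;
GO
-- create a job with one T-SQL step that cycles the error log
EXEC dbo.sp_add_job @job_name = N'Weekly - Cycle Error Log';
EXEC dbo.sp_add_jobstep @job_name = N'Weekly - Cycle Error Log',
    @step_name = N'Cycle log', @subsystem = N'TSQL',
    @command = N'EXEC master.sys.sp_cycle_errorlog;';
-- run once a week, Sundays at 2am (freq_type 8 = weekly, freq_interval 1 = Sunday)
EXEC dbo.sp_add_schedule @schedule_name = N'Sundays 2am',
    @freq_type = 8, @freq_interval = 1, @freq_recurrence_factor = 1,
    @active_start_time = 020000;
EXEC dbo.sp_attach_schedule @job_name = N'Weekly - Cycle Error Log',
    @schedule_name = N'Sundays 2am';
-- target the local server so the job actually runs
EXEC dbo.sp_add_jobserver @job_name = N'Weekly - Cycle Error Log';
```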