SQL Saturday #250

I’m currently helping to plan Pittsburgh’s next SQL Saturday. It should be a good day, packed with sessions from some new speakers (see Steve Hood‘s post here) and some returning ones.

I’m personally looking forward to the session on Statistical Semantic Search. It’s something I haven’t used before.

Here is the current brochure. If you haven’t signed up yet, what are you waiting for?!

Welcome to SQLSaturday #250 in Pittsburgh

http://www.sqlsaturday.com/250/

SQLSaturday is a full-day FREE training event for SQL Server professionals and those wanting to learn more about SQL Server and network with the SQL Server community. Our event will be held Saturday, September 14, 2013 at ITT Technical Institute, 5460 Campbells Run Road, Pittsburgh, PA 15205. Admittance to this event is free, but we charge a fee of $10.00 to help defray the cost of lunch, coffee/tea, and water, which will be provided to attendees throughout the day.

Pick from more than 30 different sessions and learn from the experts!! Check the schedule to take a peek at the sessions and the speakers. Some of our sponsors are interested in hiring, so don’t forget to stop by the sponsor tables during the event.
Please register soon as seating is limited, and let friends and colleagues know about the event.

New this year!! An entire track for “Wanna Be DBAs”. Don’t know anything about SQL Server? This track is for you!! Explore the life of a DBA and see if it can be yours too. It is aimed mainly at students and novices. Once you complete your registration at http://www.sqlsaturday.com/250/eventhome.aspx, also register via Eventbrite at https://wannabeadba.eventbrite.com/ (only if you want to attend the “Wanna Be a DBA” track).

Check “Track 0” under the schedule at http://sqlsaturday.com/250/schedule.aspx for session details.

Don’t forget to follow us on Twitter @SqlSaturdayPitt and join the conversation with the #SqlSat250 hashtag



PreConference Training (Optional and prepaid) – Friday Sept 13, 2013

Introduction to Performance Tuning in SQL Server 2012 presented by John Sterrett and Konstantin Melamud
Have you always wanted to dig into performance tuning but weren’t sure where to start? During this all-day pre-con you will find out which SQL statements are causing your performance problems and learn basic concepts to improve your general performance. You will also get all the free tools and scripts the pros use to tune their systems.

Eventbrite - Introduction to Performance Tuning with SQL Server 2012


Tempdb Destroyer

In one of my previous posts, I covered how to create a really big database filled with nothing but garbage data. Recently, I was given a task to blow up tempdb, so I thought about using a similar script and tried it out.

Of course it worked and blew up tempdb, but it also errored out. This wasn’t acceptable. I needed to find a way to blow up tempdb without it erroring out.

I tried a couple little things but ended up using the following script:

SET NOCOUNT ON
 
USE tempdb
 
-- drop any leftovers from a previous run
IF OBJECT_ID('tempdb..#freespace') IS NOT NULL
DROP TABLE #freespace
IF OBJECT_ID('tempdb..#BigGuy1') IS NOT NULL
DROP TABLE #BigGuy1
IF OBJECT_ID('tempdb..#BigGuy2') IS NOT NULL
DROP TABLE #BigGuy2
IF OBJECT_ID('tempdb..#BigGuy3') IS NOT NULL
DROP TABLE #BigGuy3
IF OBJECT_ID('tempdb..#BigGuy4') IS NOT NULL
DROP TABLE #BigGuy4
IF OBJECT_ID('tempdb..#BigGuy5') IS NOT NULL
DROP TABLE #BigGuy5
GO
 
-- five wide temp tables; each row is 8000 bytes, so one row fills a whole page
CREATE TABLE #BigGuy1 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy2 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy3 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy4 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy5 ( column1 CHAR(7100), column2 CHAR(900))
 
-- holds the output of xp_fixeddrives (free MB per drive)
CREATE TABLE #freespace
( drive VARCHAR(1),
  [MB Free] INT)
 
DECLARE @drive VARCHAR(1),
        @free INT
 
-- find the drive that holds the tempdb data file
SELECT @drive = LEFT(physical_name, 1)
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
AND TYPE = 0
 
-- grab the current free space on that drive
INSERT INTO #freespace
EXEC master..xp_fixeddrives
 
SELECT @free = [MB Free]
FROM #freespace
WHERE drive = @drive
 
WHILE @free > 1
BEGIN
    -- add 25 rows (5 per table, roughly 200 KB total), then re-check free space
    INSERT INTO #BigGuy1 VALUES('test', 'test')
    INSERT INTO #BigGuy2 VALUES('test', 'test')
    INSERT INTO #BigGuy3 VALUES('test', 'test')
    INSERT INTO #BigGuy4 VALUES('test', 'test')
    INSERT INTO #BigGuy5 VALUES('test', 'test')
    INSERT INTO #BigGuy1 VALUES('test', 'test')
    INSERT INTO #BigGuy2 VALUES('test', 'test')
    INSERT INTO #BigGuy3 VALUES('test', 'test')
    INSERT INTO #BigGuy4 VALUES('test', 'test')
    INSERT INTO #BigGuy5 VALUES('test', 'test')
    INSERT INTO #BigGuy1 VALUES('test', 'test')
    INSERT INTO #BigGuy2 VALUES('test', 'test')
    INSERT INTO #BigGuy3 VALUES('test', 'test')
    INSERT INTO #BigGuy4 VALUES('test', 'test')
    INSERT INTO #BigGuy5 VALUES('test', 'test')
    INSERT INTO #BigGuy1 VALUES('test', 'test')
    INSERT INTO #BigGuy2 VALUES('test', 'test')
    INSERT INTO #BigGuy3 VALUES('test', 'test')
    INSERT INTO #BigGuy4 VALUES('test', 'test')
    INSERT INTO #BigGuy5 VALUES('test', 'test')
    INSERT INTO #BigGuy1 VALUES('test', 'test')
    INSERT INTO #BigGuy2 VALUES('test', 'test')
    INSERT INTO #BigGuy3 VALUES('test', 'test')
    INSERT INTO #BigGuy4 VALUES('test', 'test')
    INSERT INTO #BigGuy5 VALUES('test', 'test')
 
    -- refresh the free-space numbers from xp_fixeddrives
    DELETE #freespace
 
    INSERT INTO #freespace
    EXEC master..xp_fixeddrives
 
    SELECT @free = [MB Free]
    FROM #freespace
    WHERE drive = @drive
 
    -- show progress
    PRINT @free
 
END

This one uses a loop to check how much free space you have left. You can also set the threshold. For example, if you want to have 100MB free, you would just change @free > 1 to @free > 100.
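So, for a 100 MB cushion, the only line that changes is the loop condition (the body stays exactly the same):

-- keep filling until less than 100 MB remains free on the tempdb drive
WHILE @free > 100
BEGIN
    -- ...same INSERTs and free-space re-check as above...
END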

A word of caution: do NOT run this on your production system. We used this for a training session in a TEST environment.

The easy part about this one is that when you are finished expanding your tempdb, you can just restart your SQL services and it will reset the size.
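If you want to confirm the size before and after the restart, a quick sanity check like this does the trick (just a helper query, not part of the script above):

-- tempdb file sizes; size is stored in 8 KB pages, so convert to MB
SELECT name,
       size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;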

Have fun!

 


#sqlsat171 – behind the scenes

This year was the first year Pittsburgh had a SQL Saturday. Luckily, I found out about it a couple months beforehand and was able to get involved. I didn’t help out that much, but I was still able to lend a hand. It was the first time I was able to be part of this kind of event. In the past I have always been the one who went to the sessions, but this time I got to see behind the scenes.

It started when I signed up (one of the first days!) and checked the “would you like to volunteer” checkbox during the registration process (which was very easy, btw). I then received a series of emails for in-person meetings or phone conference meetings. I was able to attend a couple of the phone calls but didn’t really have much input to offer. The only in-person meeting I made it to was one held after our local user group meeting. It was a great one to go to because it was where we picked out all the sessions. It was a long process, but I think we were all satisfied with the outcome. We had a solid, jam-packed day planned.

After that, I helped out a bit by contacting the former Chicago SQL user group president, Bill Lescher, and asking for some words of wisdom. He had some good advice about getting sponsors. I also contacted Pluralsight.com about sponsoring the event (which they did!!!).

Then came some more emails, and finally the day arrived. Volunteers had to be there at 6am, which meant I had to wake up at 5am to get ready and drive up to the north side of Pittsburgh. Of course I had to stop on the way and pick up some caffeine in Monster form.

ready?????

ready!

 

My duty for the day was to be a room proctor. I was in charge of getting the speakers what they needed, handing out and collecting evaluations, and making sure we stayed within the time limits. It was an easy job because I got to attend sessions at the same time.

So how were the sessions? GREAT! It was nice to spend a whole day listening to and learning about a technology that you find interesting.

I started off my day with a nice intro to Service Broker by Adam Belebczuk. Service Broker can get a bit complicated if you have never used it before, and I thought Adam did a great job cramming a ton of info into one hour. Great presentation skills as well (ZOOM IT!).

Next up was Dean Richards talking about VMware. It was another good introduction; if you had never used VMware before, this was your course. For me it was a nice recap.

The final morning session was Mike Hillwig‘s session on seven SQL Agent jobs I should be running. It was a very entertaining session that showed some great methods for DB maintenance. Personally, I like to use PowerShell for monitoring errors across multiple servers, but Mike’s ideas were also very good. I hope to see more sessions by him in the future.

LUNCH BREAK – The only takeaway I got from lunch was that college students don’t get dressed on weekends. The event was held at La Roche College. We didn’t eat until around 12:30. While I waited in line to get food with my fellow SQL Sat-ers, I noticed that every single college student I saw was still in their PJs or sweatpants. It was fall break, though. Lucky them.

After lunch I went to Justin Dearing‘s session on MongoDB. It was very cool to be exposed to NoSQL. I’ve read about it before but had never seen it up close. It’s something I doubt I will ever use, but now I feel I have a better understanding of it.

The next session was Michael John’s Top Development and Design Mistakes. It was a packed room for some anecdotes from Michael’s experience in the field.

My final session of the day might have been my favorite: Sam Nasr’s 14 new SQL functions in 2012. It was exactly what the title says it was. We went over them one by one. It was a great session that felt more like a class, and the whole group was engaged. We actually had some time left over and explored the functions a bit more in depth.
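If you’re curious what some of those look like, here are a few of the 2012 additions in action (my own quick examples, not Sam’s demo code):

-- a handful of the new SQL Server 2012 built-in functions
SELECT EOMONTH(GETDATE())                 AS end_of_month,     -- last day of the current month
       CONCAT('SQL', 'Saturday ', '#171') AS concat_demo,      -- NULL-safe string concatenation
       TRY_CONVERT(INT, 'not a number')   AS try_convert_demo, -- NULL instead of a conversion error
       IIF(1 = 1, 'yes', 'no')            AS iif_demo;         -- inline IF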

After the sessions finished, everybody filed back into the main hall for the prize giveaways and the final closing remarks.

FREE STUFF!!

After the day was over, some people headed to the bar for a couple of SQL drinks. It was a nice way to finish off a great day. Overall, I enjoyed helping out (the little bit I did) and thought it was a great thing for the city of Pittsburgh. I hope we have more of these and that the community continues to support community training.

Thank you to everybody involved!!

 

 

 


Changing physical locations of all DBs

I got a request this week from an application engineer. The request was to move ALL physical database files from one physical drive to another physical drive on the same machine.

Hmmmm, I have never moved ALL the .mdf files from one place to another at one time. One of my teammates told the application engineer that it might take some time. He wanted to detach/move/attach all the DBs.

I knew this wouldn’t work because:

1) it would take forever

2) how are you going to detach the master DB?

No Detach for YOU!

So my solution contained 3 parts:

  1. Change the location of all the files inside SQL
  2. Shut down SQL and move the files
  3. Change SQL’s startup parameters for the master database

Easy as π.

I used some good old dynamic SQL to create an ALTER DATABASE statement for each DB (except the master):

--First, let's just take a look and see where our files are living
SELECT name, physical_name,
CASE
WHEN type = 0 THEN 'DATA'
WHEN type = 1 THEN 'LOG'
END AS FileType
FROM sys.master_files
 
--Now let's create those ALTER DATABASE statements
SELECT 'ALTER DATABASE ' + d.name + ' MODIFY FILE ( NAME = ' + f.name + ', FILENAME = ' +
 
--here is where you put the folder you want to move your files to
 
' ''c:\NewFolder\'+
RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name)) - 1) + ''' ) '
FROM sys.databases d
INNER JOIN sys.master_files f ON d.database_id = f.database_id
WHERE
 --remove this if you want both the data AND the log files
 type = 0
 --and keep this part; database_id 1 is master
 AND d.database_id <> 1
 
--Check again to see if the physical name has changed
SELECT name, physical_name,
CASE
WHEN type = 0 THEN 'DATA'
WHEN type = 1 THEN 'LOG'
END AS FileType
FROM sys.master_files
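The script doesn’t move anything itself; it just spits out one ALTER statement per data file for you to copy and run, each looking roughly like this (database and file names here are made up):

ALTER DATABASE MyAppDb MODIFY FILE ( NAME = MyAppDb_Data, FILENAME = 'c:\NewFolder\MyAppDb.mdf' )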

After you run the generated statements, you should see this message a bunch of times:

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.

Now you can shut down your SQL services and move the physical files to their new location.

Once you have all the files moved over (including the master) you’ll need to change the startup parameters.

To do this:

  1. open up the configuration manager
  2. Right click on your SQL instance –> properties
  3. Click on advanced –> startup parameters
  4. Change the location of the master .mdf (see the example below)

Change the location here
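For reference, the startup parameters end up looking something like this, where -d points at the master data file, -l at the master log file, and -e at the error log (the paths below are just an example):

-dC:\NewFolder\master.mdf
-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG
-lC:\NewFolder\mastlog.ldf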

Once you have updated your startup parameters, you should be good to go. I want to stress “should” here because you have to make sure that your service account has access to the new location. If it doesn’t, SQL won’t start.
I tried this a couple of times on my test machine and then in the production environment. I was hoping to “wow” the customer with the speed of completing this task; unfortunately, their disks were really slow and moving the files took the majority of the time.

WTF is the DL on COTS EMR DBs?

Well, it seems I have missed April and allllllmost May. I had a nice 2.5-month vacation from work, but just this week I started a new position in Pittsburgh. During my time off I monitored #sqlhelp, read some articles, and tried to keep up with the ever-changing world of databases. I also did a lot of relaxing, played too much Skyrim, and traveled quite a bit.

The company I started working for is having a conference for their users this week so almost no one is in the office. It gave me a good chance to read through their documentation.

It’s very easy to get lost in documentation when you are new to a company. Especially when most companies these days use lots and lots of acronyms. Luckily, a company acronym dictionary was given to me in my intro packet. It came in handy while reading through documents and meeting with various people. It’s still easy to get lost in conversations involving simple things.

Do you use a lot of acronyms at your company? Do you document them? Love em/Hate em?


1 year old sandwich

It’s been an interesting day. Today was the one-year anniversary of SQLSandwiches, and it was also my last day working in New Zealand. That’s right, I decided to leave my role as DBA at Foodstuffs. Mrs. Sandwiches and I are headed back to the States to be closer to our family.

Foodstuffs was my second job in New Zealand. I enjoyed my time working there. My managers were great and let me database my day away. I learned a lot and worked with great people. People stay there for a long time, and after only a short while I could see why. I also had the best ID ever there.

you got something better?

Back to the sandwiches. I’ve totally slacked off on the blog so far this year in preparation for this big move. Now that I will have some down time, I can catch up on all the half-written posts in my backlog. I’ll be taking the next couple weeks off to travel around Japan and then head to the States! Until then it’s going to be lots of reading, running, blogging, and studying for some exams.

So here’s to 1 year blogging under SQLSandwiches and the end of a short but great job in Auckland.

BEER!!!

Final words to Foodstuffs…David, you can keep DTM! HA!

w...t...f....


Find failed SQL jobs with powershell

This weekend I went to SQL Saturday in Auckland. It was nice to interact with other DBAs again. Kent Chenery (blog | twitter) did a nice session on PowerShell for n00bs, and it got me thinking that I need to start posting some of my PowerShell stuff up here.

When I started my current job I was given a handover document that had a list of standard operating procedures, some links to some useful info, and a “daily morning checks” list. When I first read this list I was amazed. It went something like this:

  1. Log on to server A
  2. Open up management studio
  3. Open up SQL Agent’s jobs
  4. Order by last outcome
  5. Rerun the jobs that failed
  6. Repeat for servers B, C, D….and on

Ouch... this was all done manually. It would take way too long just to “check” whether these jobs failed. To fix this issue I turned to my good friend – PowerShell. I came up with a little script to hit up all my servers, check the SQL jobs, and mail me if a job failed in the last 24 hours.

#Find Failed SQL Jobs with Powershell
#by Adam Mikolaj
#www.sqlsandwiches.com
 
 
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null;
 
 
#let's get our list of servers. For this, create a .txt files with all the server names you want to check.
$sqlservers = Get-Content "C:\DevWork\scripts\computers.txt";
 
#we'll get the long date and toss that in a variable
$datefull = Get-Date
#and shorten it
$today = $datefull.ToShortDateString()
 
#let's set up the email stuff
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient("ExchangeServerNameHere")
$msg.Body = "Here is a list of failed SQL Jobs for $today (the last 24 hours)"
 
 
#here, we will begin with a foreach loop. We'll be checking all servers in the .txt referenced above.
foreach($sqlserver in $sqlservers)
{
 
    #here we need to set which server we are going to check in this loop
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $sqlserver;
 
        #now let's loop through all the jobs
        foreach ($job in $srv.Jobserver.Jobs)
        {
            #now we are going to set up some variables. 
            #These values come from the information in $srv.Jobserver.Jobs
            $jobName = $job.Name;
            $jobEnabled = $job.IsEnabled;
            $jobLastRunOutcome = $job.LastRunOutcome;
            $jobLastRun = $job.LastRunDate;
 
 
            #we are only concerned about jobs that are enabled and have run before.
            #POSH is weird with nulls, so you check by just calling the var
            #if we wanted to check isnull() we would use !$jobLastRun
            if($jobEnabled -eq $true -and $jobLastRun)
                {  
                   # we need to find out how many days ago that job ran
                   $datediff = New-TimeSpan $jobLastRun $today 
                   #now we need to take the value of days in $datediff
                   $days = $datediff.days
 
 
                       #gotta check to make sure the job ran in the last 24 hours     
                       if($days -le 1 )                    
                         {       
                            #and make sure the job failed
                            IF($jobLastRunOutcome -eq "Failed")
                            {
                                #now we add the job info to our email body. use `n for a new line
                			    $msg.body   = $msg.body + "`n `n FAILED JOB INFO: 
                                 SERVER = $sqlserver 
                                 JOB = $jobName 
                                 LASTRUN = $jobLastRunOutcome
                                 LASTRUNDATE = $jobLastRun"
 
                            }    
                          } 
                }
 
 
        }
}
 
#once all that loops through and builds our $msg.body, we are ready to send
 
#who is this coming from
$msg.From = "adam@sqlsandwiches.com"
#and going to
$msg.To.Add("adam@sqlsandwiches.com")
#and a nice pretty title
$msg.Subject = "FAILED SQL Jobs for $today"
#and BOOM! send that bastard!
$smtp.Send($msg)

Just set this script up to run every morning as a scheduled task and you can enjoy a nice SQL Sandwich instead of plowing through SQL agent jobs.
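If it helps, registering that task from a command prompt is a one-liner along these lines (the task name, time, and script path are placeholders for whatever you use):

schtasks /create /tn "Check Failed SQL Jobs" /sc daily /st 06:30 /tr "powershell.exe -ExecutionPolicy Bypass -File C:\DevWork\scripts\FailedSqlJobs.ps1"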

Next up, I’ll do the same job but use the psx extensions. Wonder which is easier…


Bring it on 2012

Well, I’m about a week and a half late with the whole New Year blog post but I figure better late than never.

So just a little recap of my year. 2011 was the first year I started blogging about SQL. It was pretty rewarding and I hope to do more of it this year. When I got in the zone, things just flowed, but when work or other life distractions got in the way, things slowed down a bit.

Last year I was able to learn a good bit about:

  • PowerShell, PowerShell, PowerShell – I am still at the beginning of my journey, but some of the things I have done so far have saved me so much time.
  • Access (unfortunately) – some critical systems got tossed my way that were dependent on Access. I had to learn how to deal with them. It wasn’t fun, but it was awesome once everything worked (for that week).
  • Big DBs – This year I got to work with some monsters. It’s been a good opportunity so far. It makes tuning even more fun.
  • Indexes – used, needed, overlapping, and of course… fragmented.
  • DBCC commands – Just when you think you know some of these, you learn a new one that totally changes how you work.
  • Twitter – this is the first year I have ever used Twitter. I used to think it was the stupidest thing in the world. I started using it a bit and holy crap, a whole new world opened up. I can’t believe the people I have interacted with on there. SQL has a great community and I hope I can give back a little more this year.
  • Double Hopping and SPNs – every DBA needs to go through this at least once.
  • Extended Events – Profiler? Who needs that?

Overall it’s been another great year to be a DBA. I like what I do and just want to get better and increase the awesome meter.

This year I’ll be focusing a lot on PowerShell, puffy clouds of computers, 2012 of course, Extended Events, and probably getting another cert or three. I WILL also present once or twice this year, something I have never done before. If you’ve got advice, send it my way.

The other area I am hoping to dip more into is BI. I have a feeling that once 2012 comes out there will be a big industry swing toward really good BI people. I’m hoping to jump on that wagon.

I will also be moving to the states in the near future. That is going to be a whole new world of adventure that I am ready for.

So 2012, bring it.


What’s your build number?

Without looking, name your most important server. Or if you have a bunch, name the top 2 or 3.

Now tell me what edition you are running. Not too hard. “SQL 2008 R2 Standard” or whatever.

Again, without looking – what’s the build number? Hmmmm. 10.5… something or other.

Now imagine your server gets hit with a bad case of the Hulk Smashes and it’s gone. Well, good thing you have backups.

Now let’s just take our master database and restore that sucker right there on our backup machine. Uh-oh. What’s that error? 3168!?!?! NOO!!!

whoops...

If you need to restore a master database (and let’s hope you don’t), you better know what build it is before you start down that path. What’s the best way to know all your build numbers for all your servers? Ugh…documentation. I know, I know. We all hate it but in this case it’s going to save you a headache.
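When you do get a chance to look, a query like this grabs all three pieces for the wiki:

-- edition, product level (RTM/SP), and exact build number of the instance
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('ProductVersion') AS BuildNumber;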

Granted, you can get the build number from the error message above but do you really want to spend the time adding a service pack to your backup server before restoring your master DB? I’m sure the big boss man doesn’t want to have extra downtime.

I use a department wiki that has facts about all the production servers. Sometimes it’s a pain to keep updated but it’s necessary.

Do you keep your build numbers documented?

 


One Delete at a Time – Powershell Slice

A co-worker approached me the other day asking for some help with a 3rd party tool. This tool was some sort of auditing software that kept creating trace files and then doing something else with the data.

My co-worker came to me because this tool was sucking up disk space by constantly rolling over 100 MB trace files. The tool was supposed to delete these files based on name, but since they were rolling over too quickly, their naming convention was altered.

Normally, the files were named something like “tracefile_1”, with the next being “tracefile_2”. Since they were rolling over too quickly, they were being named “tracefile_2-1” and “tracefile_2-2”, and the delete job wasn’t set up to handle that kind of file. I dug around their code a bit to figure out what it did, but instead of going through the whole process of altering the tool, filling out forms, and all that fun stuff, I just wrote a PowerShell script to delete the oldest file in the folder.

So here is my solution. I set it up to run every hour. It checks the trace file folder and deletes the oldest files one at a time until the folder is under 3 GB.

#put the folder we want to check here
$folder = "c:\temp2"
#now we need to see how big that folder is
$foldersize = (Get-ChildItem $folder | Measure-Object -Property Length -Sum)
#and convert it to GB (kept as a number so the size comparisons below behave correctly)
$GBsize = [math]::Round($foldersize.Sum / 1GB, 2)
 
#now, let's check to see if it's over 3 GB
If ($GBsize -gt 3)
    #if it is, we want to DO the following
    {do
        #get the oldest file (sorted by LastWriteTime) and remove it
        {dir $folder | sort LastWriteTime | select -First 1 | Remove-Item -Force
            #now let's recheck the folder size
            $foldersize = (Get-ChildItem $folder | Measure-Object -Property Length -Sum)
            $GBsize = [math]::Round($foldersize.Sum / 1GB, 2)
            #print the folder size for testing
            $GBsize
        }
        #is the folder under 3 GB? Yes, we are done. No, go back and delete another file
        until ($GBsize -lt 3)
        Write-Host "Deletes Done"
    }
 else {"No deletes Needed"}

Does anyone know a better way to do this?
