1 year old sandwich

It’s been an interesting day. Today was the one-year anniversary of SQLSandwiches, and it was also my last day working in New Zealand. That’s right, I decided to leave my current role as DBA at Foodstuffs. Mrs. Sandwiches and I are headed back to the States to be closer to our family.

Foodstuffs was my second job in New Zealand. I enjoyed my time working there. My managers were great and let me database my day away. I learned a lot and worked with great people. People tend to stay there for a long time, and after only a short while I could see why. I also had the best ID ever there.

you got something better?

Back to the sandwiches. I’ve totally slacked off on the blog so far this year in preparation for this big move. Now that I will have some downtime, I can catch up on all the half-written posts in my backlog. I’ll be taking the next couple of weeks off to travel around Japan and then head to the States! Until then it’s going to be lots of reading, running, blogging, and studying for some exams.

So here’s to 1 year blogging under SQLSandwiches and the end of a short but great job in Auckland.

BEER!!!

Final words to Foodstuffs…David, you can keep DTM! HA!

w...t...f....

Find failed SQL jobs with powershell

This weekend I went to SQL Saturday in Auckland. It was nice to interact with other DBAs again.  Kent Chenery (blog | twitter) did a nice session on powershell for n00bs and it got me thinking that I need to start posting some of my powershell stuff up here.

When I started my current job I was given a handover document with a list of standard operating procedures, links to some useful info, and a “daily morning checks” list. When I first read this list I was amazed. It went something like this:

  1. Log on to server A
  2. Open up management studio
  3. Open up SQL Agent’s jobs
  4. Order by last outcome
  5. Rerun the jobs that failed
  6. Repeat for servers B, C, D….and on

Ouch... this was all done manually. This would take way too long just to “check” whether these jobs failed. To fix this issue I turned to my good friend – PowerShell. I came up with a little script to hit up all my servers, check the SQL jobs, and email me if a job failed in the last 24 hours.

#Find Failed SQL Jobs with Powershell
#by Adam Mikolaj
#www.sqlsandwiches.com
 
 
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null;
 
 
#let's get our list of servers. For this, create a .txt files with all the server names you want to check.
$sqlservers = Get-Content "C:\DevWork\scripts\computers.txt";
 
#we'll get the long date and toss that in a variable
$datefull = Get-Date
#and shorten it
$today = $datefull.ToShortDateString()
 
#let's set up the email stuff
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient("ExchangeServerNameHere")
$msg.Body = "Here is a list of failed SQL Jobs for $today (the last 24 hours)"
 
 
#here, we will begin with a foreach loop. We'll be checking all servers in the .txt referenced above.
foreach($sqlserver in $sqlservers)
{
 
    #here we need to set which server we are going to check in this loop
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $sqlserver;
 
        #now let's loop through all the jobs
        foreach ($job in $srv.Jobserver.Jobs)
        {
            #now we are going to set up some variables. 
            #These values come from the information in $srv.Jobserver.Jobs
            $jobName = $job.Name;
            $jobEnabled = $job.IsEnabled;
            $jobLastRunOutcome = $job.LastRunOutcome;
            $jobLastRun = $job.LastRunDate;
 
 
            #we are only concerned about jobs that are enabled and have run before. 
            #POSH is weird with nulls so you check by just calling the var
            #if we wanted to check isnull() we would use !$jobLastRun  
            if($jobEnabled -eq $true -and $jobLastRun)
                {  
                   # we need to find out how many days ago that job ran
                   $datediff = New-TimeSpan $jobLastRun $today 
                   #now we need to take the value of days in $datediff
                   $days = $datediff.days
 
 
                       #gotta check to make sure the job ran in the last 24 hours     
                       if($days -le 1 )                    
                         {       
                            #and make sure the job failed
                            IF($jobLastRunOutcome -eq "Failed")
                            {
                                #now we add the job info to our email body. use `n for a new line
                                $msg.body = $msg.body + "`n `n FAILED JOB INFO: 
                                 SERVER = $sqlserver 
                                 JOB = $jobName 
                                 LASTRUN = $jobLastRunOutcome
                                 LASTRUNDATE = $jobLastRun"
 
                            }    
                          } 
                }
 
 
        }
}
 
#once all that loops through and builds our $msg.body, we are ready to send
 
#who is this coming from
$msg.From = "adam@sqlsandwiches.com"
#and going to
$msg.To.Add("adam@sqlsandwiches.com")
#and a nice pretty title
$msg.Subject = "FAILED SQL Jobs for $today"
#and BOOM! send that bastard!
$smtp.Send($msg)

Just set this script up to run every morning as a scheduled task and you can enjoy a nice SQL Sandwich instead of plowing through SQL agent jobs.
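
If you’d rather keep the schedule inside SQL Server instead of Windows Task Scheduler, a SQL Agent job with a CmdExec step can kick off the same script. This is just a sketch; the job name and script path below are made up, so swap in your own:

USE msdb;
GO

--create the job (name is just a placeholder)
EXEC dbo.sp_add_job
    @job_name = N'Check for failed SQL jobs';

--a CmdExec step that calls powershell.exe with the saved script (path is made up)
EXEC dbo.sp_add_jobstep
    @job_name  = N'Check for failed SQL jobs',
    @step_name = N'Run PowerShell check',
    @subsystem = N'CmdExec',
    @command   = N'powershell.exe -ExecutionPolicy Bypass -File "C:\DevWork\scripts\FailedJobs.ps1"';

--run it every day at 7:00 AM
EXEC dbo.sp_add_jobschedule
    @job_name = N'Check for failed SQL jobs',
    @name     = N'Daily 7am',
    @freq_type = 4,          --daily
    @freq_interval = 1,      --every 1 day
    @active_start_time = 070000;

--and tie the job to the local server
EXEC dbo.sp_add_jobserver
    @job_name = N'Check for failed SQL jobs',
    @server_name = N'(local)';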

Next up, I’ll do the same job but use the psx extensions. Wonder which is easier…


Bring it on 2012

Well, I’m about a week and a half late with the whole New Year blog post but I figure better late than never.

So just a little recap of my year. 2011 was the first year I started blogging about SQL. It was pretty rewarding and I hope to do more of it this year. When I got in the zone things just flowed but when work or other life distractions got in the way things slowed down a bit.

Last year I was able to learn a good bit about:

  • Powershell, Powershell, Powershell – I am still at the beginning of my journey but some of the things I have done so far have saved me so much time.
  • Access (unfortunately) – some critical systems got tossed my way that were dependent on Access. I had to learn how to deal with them. It wasn’t fun, but it was awesome once everything worked (for that week).
  • Big DBs – This year I got to work with some monsters. It’s been a good opportunity so far. It makes tuning even more fun.
  • Indexes – used, needed, overlapping, and of course…fragmented.
  • DBCC commands – Just when you think you know some of these, you learn a new one that totally changes how you work.
  • Twitter – this is the first year I have ever used Twitter. I used to think it was the stupidest thing in the world. Then I started using it a bit and holy crap, a whole new world opened up. I can’t believe the people I have interacted with on there. SQL has a great community and I hope I can give back a little more this year.
  • Double Hopping and SPNs – every DBA needs to go through this at least once.
  • Extended Events – Profiler? Who needs that?

Overall it’s been another great year to be a DBA. I like what I do and just want to get better and increase the awesome meter.

This year I’ll be focusing a lot on PowerShell, puffy clouds of computers, 2012 of course, extended events, and probably another cert or three. I WILL also present once or twice this year, something I have never done before. If you’ve got advice, send it my way.

The other area I am hoping to dip into more is BI. I have a feeling there will be a big industry swing toward really good BI people once 2012 comes out. I’m hoping to jump on that bandwagon.

I will also be moving to the States in the near future. That is going to be a whole new world of adventure, and I am ready for it.

So 2012, bring it.


What’s your build number?

Without looking, name your most important server. Or if you have a bunch, name the top 2 or 3.

Now tell me what edition you are running. Not too hard. “SQL 2008 R2 Standard” or whatever.

Again, without looking – what’s the build number? Hmmmm. 10.5…something or other.

Now imagine your server gets hit with a bad case of the Hulk Smashes and it’s gone. Well, good thing you have backups.

Now let’s just take our master database and restore that sucker right there on our backup machine. Uh-oh. What’s that error? 3168!?!?! NOO!!!

whoops...

If you need to restore a master database (and let’s hope you don’t), you better know what build it is before you start down that path. What’s the best way to know all your build numbers for all your servers? Ugh…documentation. I know, I know. We all hate it but in this case it’s going to save you a headache.

Granted, you can get the build number from the error message above but do you really want to spend the time adding a service pack to your backup server before restoring your master DB? I’m sure the big boss man doesn’t want to have extra downtime.
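
If the error message has already scrolled off, the backup file itself will also tell you which build wrote it (the path below is made up):

--the SoftwareVersionMajor / SoftwareVersionMinor / SoftwareVersionBuild columns
--show exactly which build took the backup
RESTORE HEADERONLY FROM DISK = N'C:\backups\master.bak';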

I use a department wiki that has facts about all the production servers. Sometimes it’s a pain to keep updated but it’s necessary.
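
If you want an easy way to grab the numbers for that wiki, SERVERPROPERTY will hand them over on each instance:

SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,   --e.g. 10.50.xxxx
       SERVERPROPERTY('ProductLevel')   AS ServicePack,
       SERVERPROPERTY('Edition')        AS Edition;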

Do you keep your build numbers documented?

 


One Delete at a Time – Powershell Slice

A co-worker approached me the other day asking for some help with a 3rd party tool. This tool was some sort of auditing software that kept creating trace files and then doing something else with the data.

My co-worker came to me because this tool was sucking up disk space by constantly rolling over 100 MB trace files. The tool was supposed to delete these files based on name, but since they were rolling over too quickly, their naming convention was altered.

Normally, the files were named something like “tracefile_1” and the next one “tracefile_2”. Since they were rolling over too quickly, they were being named “tracefile_2-1” and “tracefile_2-2”. The delete job wasn’t set up to catch this kind of file. I dug around their code a bit to figure out what it did, but instead of going through the whole process of altering the tool, filling out forms, and all that fun stuff, I just wrote a PowerShell script to delete the oldest file in the folder.

So here is my solution: I set it up to run every hour. It checks the trace file folder and deletes the oldest file, one at a time, until the folder is under 3 GB.

#put the folder we want to check here
$folder = "c:\temp2"
#now we need to see how big that folder is
$foldersize = (Get-ChildItem $folder | Measure-Object -Property length -Sum)
#and convert it to GBs. Keep this numeric so the -gt/-lt checks below compare numbers, not strings
$GBsize = $foldersize.Sum / 1GB
 
#now, let's check to see if it's over 3 GB
If ($GBsize -gt 3)
    #if it is, we want to DO the following
    {do
        #Let's get the oldest file (sorted by LastWriteTime) and remove it
        {dir $folder | sort LastWriteTime | select -First 1 | Remove-Item -Force
            #now let's recheck the folder size
            $foldersize = (Get-ChildItem $folder | Measure-Object -Property length -Sum)
            $GBsize = $foldersize.Sum / 1GB
            #print the folder size for testing
            "{0:N2}" -f $GBsize
        }
        #is the folder less than 3 GB? Yes, we are done. No, go back and delete another file
        until ($GBsize -lt 3)
        Write-Host "Deletes Done"
    }
 else {"No deletes Needed"}

Does anyone know a better way to do this?


I will count you all – Return row count for all tables

I’ve built some pretty rad SSIS packages to move massive amounts of data from an older Ingres database system into SQL.

My package moves millions and millions of records across numerous tables in about 15-20 minutes.

When I was testing this sucker, I really wanted to monitor how many rows were getting inserted into each table. Since I am impatient and need a fresh slice of status all the time, I created this little ditty to return the row counts for all tables:

IF Object_id('tempdb..#count_table') IS NOT NULL
  DROP TABLE #count_table 
 
CREATE TABLE #count_table
  (
     TABLE_NAME VARCHAR(100),
     ROW_COUNT  INT
  ) 
 
INSERT INTO #count_table
EXEC sys.Sp_msforeachtable ' SELECT ''?'', count(*) from ?' 
 
SELECT *
FROM   #count_table
ORDER  BY ROW_COUNT DESC

Anyone know a better way?

UPDATE: I’ve tried another method below. This should be faster.

IF OBJECT_ID('tempdb..#sizeo') IS NOT NULL 
DROP TABLE #sizeo
 
CREATE TABLE #sizeo (
NAME VARCHAR(100),
ROWS INT,
reserved VARCHAR(50),
DATA VARCHAR(50),
index_size VARCHAR(50), 
unused VARCHAR(50)
)
 
EXEC sp_msforeachtable '
insert into #sizeo
EXEC sp_spaceused ''?''
'
SELECT name, ROWS FROM #sizeo
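
For what it’s worth, another option that skips sp_msforeachtable entirely is to read the counts straight from the catalog views. The numbers come back instantly because nothing gets scanned, though they can drift slightly while transactions are in flight:

SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(ps.row_count) AS row_count
FROM   sys.dm_db_partition_stats AS ps
       JOIN sys.tables  AS t ON t.object_id = ps.object_id
       JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE  ps.index_id IN (0, 1)   --heap or clustered index only
GROUP  BY s.name, t.name
ORDER  BY row_count DESC;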

How pure is your data?

It was a normal Monday. I was going over some HA designs and planning the joys of setting up log shipping on a SQL 2000 instance when I got one of “those” emails.

“Hey Database Person, I’m trying to run this query but I keep getting this message:

Msg 9100, Level 23, State 2, Line 1
Possible index corruption detected. Run DBCC CHECKDB.

Can you fix it? I need to run these reports for our production line ASAP!”

Crap. I was just reading an article about all different kinds of corruption last week but didn’t run into this.

My first thought was to do what SQL told me to do, which was Run DBCC CheckDB. Luckily, it wasn’t that large of a database.  After it finished, it returned fine and dandy with no errors. Cool, let’s try that query again and hope everything is fine.

Msg 9100, Level 23, State 2, Line 1
Possible index corruption detected. Run DBCC CHECKDB.

Crap part 2. Possible index corruption? Maybe; SQL says it’s possible. I needed to rebuild the indexes on this table. Since the table contained over 11 million records, I double-checked with the person in charge of the database to make sure nothing would be damaged. I also checked out what was happening within the server. The server was relatively calm and had very low CPU usage. Nice, let’s go.
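
The rebuild itself is a one-liner (the table name below is a stand-in, since I can’t share the real one):

--rebuild every index on the suspect table in one go
ALTER INDEX ALL ON dbo.SuspectTable REBUILD;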

A couple of minutes later, the indexes were rebuilt. Awesome, let’s run that query again and see our results.

Msg 9100, Level 23, State 2, Line 1
Possible index corruption detected. Run DBCC CHECKDB.

You son of a…. Well, there weren’t any errors and the indexes were fine. What else could it be? Good ol’ Google led me to a couple of different pages where I finally found CHECKDB with DATA_PURITY.

I ran DBCC CHECKTABLE with DATA_PURITY because I knew exactly which table had the problems. Oh, did the error messages fly then. So many, in fact, that SQL said there were too many to list (sounds like someone is being lazy if you ask me). I tried to manually find a record or two that contained bad data. The column was a float, so I figured it might have had some crazy number shoved in there somewhere.
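
If you haven’t used it before, the syntax is just the normal check command with the option bolted on (object names below are placeholders):

--check just the suspect table
DBCC CHECKTABLE ('dbo.SuspectTable') WITH DATA_PURITY;

--or sweep the whole database
DBCC CHECKDB ('MyDatabase') WITH DATA_PURITY;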

Then I spoke with the person who sent me the error, and she said the DB had been upgraded from 2000 to 2008 within the last year. Before the upgrade, they used “NaN” – not a number – in that column. Bingo. That was it. Now I just had to find the bad records.

I took a backup of the DB and restored it to a dev playground. Next I did a nice SELECT INTO a new table, but instead of keeping the column as a float, I converted it to a varchar. I checked a couple of the bad records I had found before and, lo and behold, there were about 3,200 records that had “#”.
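
Roughly, the hunt on the restored copy looked something like this (table and column names are made up for the example):

--copy the suspect column out as text so the bad values become visible
SELECT KeyColumn,
       CAST(FloatColumn AS VARCHAR(50)) AS FloatAsText
INTO   #suspects
FROM   dbo.SuspectTable;

--anything that doesn't look like a number is a candidate for cleanup
SELECT *
FROM   #suspects
WHERE  ISNUMERIC(FloatAsText) = 0;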

Finally, I contacted the DB owner and told them about the bad data. Luckily, since the records were old, they told me to just NULL them out. A quick UPDATE statement later and it was back to check the original query.
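
The cleanup itself was just a simple UPDATE along these lines (again with made-up names, and assuming the bad values convert to varchar the same way they did on the restored copy):

--NULL out the old NaN leftovers the DB owner agreed to drop
UPDATE dbo.SuspectTable
SET    FloatColumn = NULL
WHERE  CAST(FloatColumn AS VARCHAR(50)) = '#';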

Booyah! Query complete.

In today’s lesson, I learned about checking the purity of your data.  Also, it’s something to check for when upgrading databases. As BOL says: “For databases that were created in earlier versions of SQL Server, such as SQL Server 2000, SQL Server 7.0, and versions upgraded to SQL Server 2005, these checks are not enabled by default.”


Code Camp New Zealand 2011

I was unable to attend TechEd this year but I did get a chance to hit up Code Camp. It was a one day free event with four different tracks of six sessions each. I attended all of the SQL sessions and learned some good stuff.

The first session was on Full Text Search by Kent Chenery (blog | twitter). It was a good overview on FTS. I’ve been studying a bit for some upcoming certification exams so reviewing CONTAINS vs FREETEXT as well as FORMSOF was great for me. I didn’t know about the 50 pre-installed filters, either.

Next was supposed to be a session about monitoring your server for $0.00, but speaker Dave Dustin couldn’t speak (literally, he lost his voice). We watched part of a powershell video from PASS 2010 instead.

Then came Steven Wang’s session on minimal logging and moving large chunks of data. He talked about the best way to update/delete large volumes of data by moving what you want to keep and truncating the rest. It was an interesting talk, and I want to play around and try out some more of his ideas. He also covered trace flag 610, which I had never heard of.

Last but not least was Leo Miller’s session on SQL security. He went over how to access SQL using only the builtin\admin group. I’ve used this before and I don’t think it’s that terrible: I was once new at a job and no one knew the SA password for one SQL instance. He noted that this feature will be gone in Denali, and I have mixed feelings about that. Miller also covered some really cool stuff on SQL injection. I never even thought about encoding SQL commands as hex, putting the command into a variable, and then executing that variable. He also mentioned putting sp_password in your injection to cover your tracks. Overall, this was my favorite presentation. I wish we had more time to go into detail and actually get to bust out our laptops and play around.

I really enjoyed Code Camp 2011. I can’t wait for the next one. As for Auckland, I think the next event we have coming up is SQLSaturday. Woohoo!


Splitting IDs with commas, updating them, then slamming them back together!

Unfortunately, my company still uses MS Access DBs. That’s Access 97, folks! I’ve battled many challenges over the last couple months with these databases. It hasn’t been fun, but it’s been a great learning experience.

I had a fun adventure last week dealing with one of these DBs. We have a user Access DB that connects to our SQL user DB. If HR updates something in their system, it also gets updated in the Access DB, and a job then updates the SQL DB from the Access DB. The job isn’t done in SQL, though. The dev team that was lucky enough to get the task of updating that job made a simple change that should have been a cakewalk.

The update was completed easily, but no one took the dependencies into consideration. We didn’t even know we had dependencies on this old job/DB until the emails started rolling in.

Long story short, we found the problem and were able to update it. The bad part was fixing the busted data. We found out that this DB was tied to two other MS-Access DBs. D’OH!

From there I was given the task of fixing the data in the other two DBs ASAP so people could work on their documents. I thought, “No big deal, update some old Access DBs”. Then I saw the data in the DBs.

The field I had to update was tied to the UserID in the User DB. This wouldn’t have been a problem except there were many cases where there was more than one UserID in that column.

Instead of a nice USERID = 123, I had loads of USERID = 123,234.

My game plan was to take all those multiple USERIDs, split them apart, update them, then slam them back together again. How the hell was I supposed to do this in Access?

Instead of making this a novel, I’ll just get to the juicy stuff. The main goal was to update old USERIDs with new USERIDs.

The first thing I did was acquire the latest and greatest ID and toss that dude into a temp table.

SELECT USERID, MAX(USERID) AS MAXUID INTO #max
FROM USERTABLE
WHERE USERID <> USERID
GROUP BY USERID

Next I needed to get all the columns that had commas, split them up, and put them into a temp table. For this I used a snippet I found here that used CTE and XML

WITH Cte AS
( SELECT
        TableID,
        CAST('<M>' + REPLACE(USERID, ',', '</M><M>') + '</M>' AS XML) AS USERID
      FROM AccessDB...CommaTable    
 
)
SELECT
   TableID,
    Split.a.value('.', 'VARCHAR(100)') AS UID,
    0 AS updated INTO #splitter
FROM Cte
CROSS APPLY USERID.nodes('/M') Split(a)

Next, I updated the temp table (#splitter) with the newest IDs

UPDATE s
SET UID = m.MAXUID,
    updated = 1
FROM #splitter s INNER JOIN #max m ON s.UID = m.USERID
WHERE s.tableid IN (
SELECT tableid
FROM #splitter
GROUP BY tableid
HAVING COUNT(tableid) > 1)

Finally, I had to smash those guys back together as they were before. I used another temp table to put them back together

SELECT t.tableid,
      --Using the STUFF command
       STUFF(ISNULL((SELECT ', ' + x.UID
                FROM #splitter x
               WHERE x.tableid = t.tableid
            GROUP BY x.UID
            --AND using XML
             FOR XML PATH (''), TYPE).value('.','VARCHAR(max)'), ''), 1, 2, '') AS USERID INTO #updated
  FROM #splitter t
  WHERE updated = 1
GROUP BY t.tableid

And then I updated the real table with the updated records

UPDATE p
SET USERID = u.USERID
FROM AccessDB...CommaTable p
      INNER JOIN #updated u ON u.tableid = p.tableid

It was a nice adventure and hopefully it’s something I can use again.
Have you been in this situation before? What did you do? What would you do differently?


How to make a Big Ugly Nasty Test Table

Do you need to fill up a database with terrible data to test restore times? Do you want to see how fast you can fill up your hard drive?

Well, try out this little bit of code on your DB and watch the MBs get sucked up!

CREATE TABLE BigGuy (column1 CHAR(8000))
CREATE TABLE BigGuy2 (column1 CHAR(8000))
CREATE TABLE BigGuy3 (column1 CHAR(8000))
CREATE TABLE BigGuy4 (column1 CHAR(8000))
CREATE TABLE BigGuy5 (column1 CHAR(8000))
DECLARE @counta INT = 1
WHILE @counta <= 1000000000
 BEGIN
INSERT INTO BigGuy VALUES('test')
 INSERT INTO BigGuy2 VALUES('test')
 INSERT INTO BigGuy3 VALUES('test') 
 INSERT INTO BigGuy4 VALUES('test') 
 INSERT INTO BigGuy5 VALUES('test') 
 SET @counta = @counta +1 
END

This should suck up about a GB a minute. Want to suck up more space? Add more tables and make sure your database is set to the FULL recovery model.
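
A quick way to check (and flip) the recovery model, assuming a test database called TestDB:

--see what the database is currently set to
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = 'TestDB';

--switch to FULL so the transaction log piles up as well
ALTER DATABASE TestDB SET RECOVERY FULL;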

WARNING! – This is NOT A GOOD IDEA and should ONLY be used in controlled environments FOR TESTING PURPOSES
