T-SQL Tuesday – SQL Family and community

This month’s T-SQL Tuesday is being hosted by Jeffrey Verheul (B|T). Our topic of the month is SQL Family. Check out the host page here.

Here is our task:

“How do you feel about SQL Family? Did they help you, or did you help someone in the SQL Family?”

I feel that the SQL Family is a great online community whose members help each other out wherever and whenever they can. That said, I don’t like to use the term “SQL Family” myself; I prefer to keep work and personal life separate. Still, when talking about my job and why I like what I do, one of the things I always bring up is the great online community.

When I first started learning about databases, I lived on sites like SQL Server Central and MSSQLTips. I loved how people asked tons of questions and other people just answered them…ACTIVELY. What was even better was that people weren’t jerks about answering basic or n00b questions.

Then I was introduced to Twitter. I started using Twitter when I was working in New Zealand. My boss at the time encouraged us to use it, but I never really thought it was useful. Then I met the #SQLHelp hashtag, and that is when I really understood what Twitter was all about. For me, it’s like an ongoing discussion in a huge room of DBAs, developers, and sysadmins. A great use of Twitter, and almost always a good discussion.

As for me helping someone: I am currently on the committee that puts together Pittsburgh’s SQL Saturday (did you sign up yet?!). For the last three years I have helped organize this event, and it’s been a great experience working with some great people. We meet periodically to discuss the event, and it takes quite a bit of work to pull these off, from organizing speakers and figuring out what’s for lunch (no pizza!) to actually getting people to come. The reward comes the day of the event, though. It’s great seeing the last couple of months’ worth of work all come together.

Overall, I think the community as a whole is the best part about being a SQL DBA. Almost every problem you’ll ever hit has been seen by someone else, and that person will be willing to help you out with it 99% of the time. And if you do hit a genuinely new problem, there will always be about 50 people on #SQLHelp or Stack Exchange to give you some ideas of what to check. Thanks, SQL peoples!

 

T-SQL Tuesday – Assumptions

This is my first attempt at contributing to T-SQL2sDay. The topic of the month is Assumptions. 

Assignment – Write about a big assumption you encounter at work, one that people are uncomfortable talking about.

I work for a vendor, and our applications don’t live on a single server but on many different servers. We have message boxes, SQL Servers, application servers, and so on.

When we get a call or email about our applications being slow or timing out, we usually log into the server and try to see where the pain point is. A lot of the blame usually gets put on SQL or the application.

Here is where the assumption comes in.

As a database administrator working for a vendor and dealing a lot with the support team, we assume that the hardware was specced out and properly configured. Many times during conference calls this uneasy topic comes up. Sometimes these conversations go something like:

“Your application isn’t working, it’s so slow. We keep getting timeouts”

“We might need to give it a bit more memory, my laptop has more resources than the database server”

It’s not a conversation a lot of people want to have, but sometimes it really isn’t the application, and it really isn’t SQL’s fault either.

Many users of the system, along with the administrators, engineers, and executives, assume that all the magical hardware is the appropriate speed, size, etc. But is it really? Is your hardware adequate for what your application is trying to do?
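
When that question comes up on a call, a quick sanity check is to compare what the box actually has against what SQL Server is allowed to use. Here is a minimal sketch (note: physical_memory_kb exists in SQL Server 2012 and later; older versions expose physical_memory_in_bytes instead):

-- What the machine actually has
SELECT cpu_count,
       physical_memory_kb / 1024 AS physical_memory_mb
FROM sys.dm_os_sys_info

-- What SQL Server is configured to use
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max server memory (MB)', 'min server memory (MB)')

If the configured memory dwarfs the physical memory (or the box really does look like a laptop), the assumption deserves a second look.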

 

SQL Saturday #250

I’m currently helping to plan Pittsburgh’s next SQL Saturday. It should be a good day with lots of good sessions by some new (see Steve Hood’s post here) and some returning speakers.

I’m personally looking forward to the session on Statistical Semantic Search. It’s something I haven’t used before.

Here is the current brochure. If you haven’t signed up yet, what are you waiting for?!

Welcome to SQLSaturday #250 in Pittsburgh

http://www.sqlsaturday.com/250/

SQLSaturday is a full-day FREE training event for SQL Server professionals and those wanting to learn more about SQL Server and network with the SQL Server community. Our event will be held Saturday, September 14, 2013 at ITT Technical Institute, 5460 Campbells Run Road, Pittsburgh, PA 15205. Admittance to this event is free, but we charge a fee of $10.00 to help defray the cost of lunch, coffee/tea, and water, which will be provided to attendees throughout the day.

Pick from more than 30 different sessions and learn from the experts!! Check the schedule to take a peek at the sessions and the speakers. Some of our sponsors are interested in hiring, so don’t forget to stop by the sponsor tables during the event.
Please register soon, as seating is limited, and let friends and colleagues know about the event.

New this year!! An entire track for “wanna-be DBAs”. Don’t know anything about SQL Server? This track is for you!! Explore the life of a DBA and see if it can be yours too. It is mainly focused towards students and novices. Once you complete your registration at http://www.sqlsaturday.com/250/eventhome.aspx, also register via Eventbrite at https://wannabeadba.eventbrite.com/ (only if you want to attend the “Wanna Be a DBA” track).

Check “Track 0” under the schedule at http://sqlsaturday.com/250/schedule.aspx for session details.

Don’t forget to follow us on Twitter @SqlSaturdayPitt and join the conversation with the #SqlSat250 hashtag.



PreConference Training (Optional and prepaid) – Friday Sept 13, 2013

Introduction to Performance Tuning in SQL Server 2012 presented by John Sterrett and Konstantin Melamud
Have you always wanted to dig into performance tuning but weren’t sure where to start? During this all-day pre-con you will find out which SQL statements are causing your performance problems, and you will be able to apply basic concepts to improve your general performance. You will also get all the free tools and scripts the pros use to tune their systems.

Register for the pre-con on Eventbrite: Introduction to Performance Tuning with SQL Server 2012.

Tempdb Destroyer

In one of my previous posts, I covered how to create a really big database full of garbage data. Recently I was given a task to blow up tempdb, so I thought about using a similar script and tried it out.

Of course it worked and it blew up tempdb, but it also errored out. That wasn’t acceptable: I needed a way to fill up tempdb without it erroring out.

I tried a couple little things but ended up using the following script:

SET NOCOUNT ON
 
USE tempdb
 
IF OBJECT_ID('tempdb..#freespace') IS NOT NULL
DROP TABLE #freespace
IF OBJECT_ID('tempdb..#BigGuy1') IS NOT NULL
DROP TABLE #BigGuy1
IF OBJECT_ID('tempdb..#BigGuy2') IS NOT NULL
DROP TABLE #BigGuy2
IF OBJECT_ID('tempdb..#BigGuy3') IS NOT NULL
DROP TABLE #BigGuy3
IF OBJECT_ID('tempdb..#BigGuy4') IS NOT NULL
DROP TABLE #BigGuy4
IF OBJECT_ID('tempdb..#BigGuy5') IS NOT NULL
DROP TABLE #BigGuy5
GO
 
CREATE TABLE #BigGuy1 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy2 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy3 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy4 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy5 ( column1 CHAR(7100), column2 CHAR(900))
 
CREATE TABLE #freespace
( drive VARCHAR(1),
[MB Free] INT)
 
DECLARE @drive VARCHAR(1),
@free INT
 
SELECT @drive = LEFT(physical_name, 1)
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
AND TYPE = 0
 
INSERT INTO #freespace
EXEC master..xp_fixeddrives
 
SELECT @free = [MB Free]
FROM #freespace
WHERE drive = @drive
 
WHILE @free > 1
BEGIN
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
 
DELETE #freespace
 
INSERT INTO #freespace
EXEC master..xp_fixeddrives
 
SELECT @free = [MB Free]
FROM #freespace
WHERE drive = @drive
 
PRINT @free
 
END

This one uses a loop that keeps checking how much free space is left, and you can set the threshold yourself. For example, if you want to leave 100 MB free, just change @free > 1 to @free > 100.

A word of caution: do NOT run this on your production system. We used this for a training session in a TEST environment.

The nice part is that when you are finished blowing up your tempdb, you can just restart your SQL Server services and the files will reset to their configured sizes.
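
If you want to verify the reset (or watch the growth while the script runs), a quick check of the current tempdb file sizes looks something like this (size is stored in 8 KB pages):

SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files

Run it before and after the restart and you should see the files back where they started.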

Have fun!

 

#sqlsat171 – behind the scenes

This year was the first year Pittsburgh had a SQL Saturday. Luckily, I found out about it a couple months beforehand and was able to get involved. I didn’t help out that much, but I was still able to lend a hand. It was the first time I was part of this kind of event; in the past I had always been the one attending the sessions, but this time I got to see behind the scenes.

It started when I signed up (one of the first days!) and checked the “would you like to volunteer” checkbox during the registration process (which was very easy, btw). I then received a series of emails about in-person meetings and phone conferences. I was able to attend a couple of the phone calls but didn’t really have much input to offer. The only in-person meeting I made was one held after our local user group meeting. It was a great one to attend because it was where we picked out all the sessions. It was a long process, but I think we were all satisfied with the outcome: a solid, jam-packed day.

After that, I helped out a bit by contacting the former Chicago SQL user group president, Bill Lescher, and asking for some words of wisdom. He had some good advice about getting sponsors. I also contacted Pluralsight.com about sponsoring the event (which they did!!!).

Then came some more emails, and finally the day arrived. Volunteers had to be there at 6am, which meant I had to wake up at 5am to get ready and drive up to the north side of Pittsburgh. Of course I had to stop on the way and pick up some caffeine in Monster form.


My duty for the day was room proctor: I was in charge of getting the speakers what they needed, handing out and collecting evaluations, and making sure we stayed within the time limits. It was an easy job because I got to attend sessions at the same time.

So how were the sessions? GREAT! It was nice to spend a whole day listening to and learning about a technology that you find interesting.

I started off my day with a nice intro to Service Broker by Adam Belebczuk. Service Broker can get a bit complicated if you have never used it before, and I thought Adam did a great job cramming a ton of info into one hour. Great presentation skills as well (ZOOM IT!).

Next up was Dean Richards talking about VMware. It was another good introduction: if you had never used VMware before, this was your course. For me it was a nice recap.

The final morning session was Mike Hillwig’s session on seven SQL Agent jobs I should be running. It was a very entertaining session that showed some great methods for DB maintenance. Personally, I like to use PowerShell to monitor errors across multiple servers, but Mike’s ideas were also very good. I hope to see more sessions from him in the future.

LUNCH BREAK – The only takeaway I got from lunch was that college students don’t get dressed on weekends. The event was held at La Roche College, and we didn’t eat until around 12:30. While I waited in line for food with my fellow SQL Saturday-ers, I noticed that every single college student I saw was still in PJs or sweatpants. It was fall break, though. Lucky them.

After lunch I went to Justin Dearing’s session on MongoDB. It was very cool to be exposed to NoSQL; I’ve read about it before but had never seen it up close. It’s something I doubt I will ever use, but now I feel I have a better understanding of it.

The next session was Michael John’s Top Development and Design Mistakes. It was a packed room for some anecdotes from Michael’s experience in the field.

My final session of the day might have been my favorite: Sam Nasr’s 14 New SQL Functions in SQL Server 2012. It was exactly what the title says: we went over the functions one by one. It was a great session that felt more like a class, with the whole group engaged. We actually had some time left over and explored some of the functions a bit more in depth.
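
If you haven’t played with them yet, here is a taste of a few of the functions the session covered (from memory, so treat this as a sampler rather than the full list of 14):

-- IIF: an inline IF
SELECT IIF(1 > 0, 'yes', 'no')

-- EOMONTH: the last day of a month
SELECT EOMONTH('2012-10-06')

-- TRY_CONVERT: returns NULL instead of erroring on a bad cast
SELECT TRY_CONVERT(INT, 'not a number')

-- FORMAT: .NET-style formatting
SELECT FORMAT(GETDATE(), 'yyyy-MM-dd')

-- CONCAT: string concatenation that treats NULL as an empty string
SELECT CONCAT('SQL', NULL, 'Saturday')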

After the sessions finished, everybody filed back into the main hall for the prize drawings and the final closing remarks.


After the day was over, some people headed to the bar for a couple of SQL drinks. It was a nice way to finish off a great day. Overall, I enjoyed helping out (the little bit I did) and thought it was a great thing for the city of Pittsburgh. I hope we have more of these and that the community continues to support community training.

Thank you to everybody involved!!

One Delete at a Time – PowerShell Slice

A co-worker approached me the other day asking for some help with a third-party tool. The tool was some sort of auditing software that kept creating trace files and then doing something else with the data.

My co-worker came to me because the tool was sucking up disk space by constantly rolling over 100 MB trace files. The tool was supposed to delete these files based on name, but since they were rolling over too quickly, their naming convention was altered.

Normally the files would be named something like “tracefile_1”, with the next one being “tracefile_2”. Since they were rolling over too quickly, they were instead named “tracefile_2-1” and “tracefile_2-2”, and the delete job wasn’t set up to catch files named that way. I dug around their code a bit to figure out what it did, but instead of going through the whole process of altering the tool, filling out forms, and all that fun stuff, I just wrote a PowerShell script to delete the oldest files in the folder.

So here is my solution. I set it up to run every hour; it checks the trace file folder and deletes the oldest files one at a time until the folder is under 3 GB.

#put the folder we want to check here
$folder = "c:\temp2"
#now we need to see how big that folder is
$foldersize = (Get-ChildItem $folder | Measure-Object -Property Length -Sum)
#and convert it to GB -- keep the value numeric; formatting it with -f turns it
#into a string, and string comparisons like "10.50" -gt 3 silently give wrong answers
$GBsize = [math]::Round($foldersize.Sum / 1GB, 2)
 
#now, let's check to see if it's over 3 GB
if ($GBsize -gt 3)
    #if it is, we want to DO the following
    {do
        #get the oldest file (sorted by LastWriteTime) and remove it
        {Get-ChildItem $folder | Sort-Object LastWriteTime | Select-Object -First 1 | Remove-Item -Force
            #now let's recheck the folder size
            $foldersize = (Get-ChildItem $folder | Measure-Object -Property Length -Sum)
            $GBsize = [math]::Round($foldersize.Sum / 1GB, 2)
            #print the folder size for testing
            $GBsize
        }
        #is the folder under 3 GB yet? Yes: we are done. No: go back and delete another file
        until ($GBsize -lt 3)
        Write-Host "Deletes Done"
    }
 else {"No deletes needed"}

Does anyone know a better way to do this?

I will count you all – Return row count for all tables

I’ve built some pretty rad SSIS packages to move massive amounts of data from an older Ingres database system into SQL Server.

My package moves millions and millions of records across numerous tables in about 15-20 minutes.

When I was testing this sucker, I really wanted to monitor how many rows were getting inserted into each table. Since I am impatient and need a fresh slice of status all the time, I created this little ditty to return row counts for all tables:

IF OBJECT_ID('tempdb..#count_table') IS NOT NULL
  DROP TABLE #count_table
 
CREATE TABLE #count_table
  (
     TABLE_NAME VARCHAR(100),
     ROW_COUNT  INT
  )
 
INSERT INTO #count_table
EXEC sp_MSforeachtable 'SELECT ''?'', COUNT(*) FROM ?'
 
SELECT *
FROM   #count_table
ORDER  BY ROW_COUNT DESC

Anyone know a better way?

UPDATE: I’ve tried another method, below. It should be faster, since sp_spaceused pulls its row counts from metadata instead of counting every row:

IF OBJECT_ID('tempdb..#sizeo') IS NOT NULL
DROP TABLE #sizeo
 
CREATE TABLE #sizeo (
NAME VARCHAR(100),
[ROWS] INT,
reserved VARCHAR(50),
DATA VARCHAR(50),
index_size VARCHAR(50),
unused VARCHAR(50)
)
 
EXEC sp_MSforeachtable '
INSERT INTO #sizeo
EXEC sp_spaceused ''?''
'
SELECT NAME, [ROWS] FROM #sizeo ORDER BY [ROWS] DESC
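
For what it’s worth, you can also skip the temp tables entirely and read the counts straight out of the metadata. Here is a sketch using sys.dm_db_partition_stats (the counts are approximate while data is in flight, which is fine for watching a load):

SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.row_count) AS row_count
FROM sys.dm_db_partition_stats AS p
JOIN sys.tables AS t ON t.object_id = p.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE p.index_id IN (0, 1) -- heap or clustered index only, so rows are not double-counted
GROUP BY s.name, t.name
ORDER BY row_count DESC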

Code Camp New Zealand 2011

I was unable to attend TechEd this year, but I did get a chance to hit up Code Camp. It was a free, one-day event with four different tracks of six sessions each. I attended all of the SQL sessions and learned some good stuff.

The first session was on Full-Text Search by Kent Chenery (blog | twitter). It was a good overview of FTS. I’ve been studying a bit for some upcoming certification exams, so reviewing CONTAINS vs. FREETEXT as well as FORMSOF was great for me. I didn’t know about the 50 pre-installed filters, either.
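
Since I was reviewing this anyway, here is a quick sketch of the difference (the table and column names are made up, and a full-text index on the column is assumed):

-- CONTAINS with FORMSOF: precise, inflection-aware matching (run, ran, running)
SELECT ProductID
FROM dbo.Products
WHERE CONTAINS(ProductDescription, 'FORMSOF(INFLECTIONAL, run)')

-- FREETEXT: looser matching based on the meaning of the whole phrase
SELECT ProductID
FROM dbo.Products
WHERE FREETEXT(ProductDescription, 'running shoes')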

Next was supposed to be a session about monitoring your server for $0.00, but the speaker, Dave Dustin, couldn’t speak (literally: he lost his voice). We watched part of a PowerShell video from PASS 2010 instead.

Then came Steven Wang’s session on minimal logging and moving large chunks of data. He talked about the best way to update or delete large volumes of data: move what you want to keep and truncate the rest. It was an interesting talk, and I want to play around with some more of his ideas. He also covered trace flag 610, which I had never heard of before.
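
The keep-and-truncate pattern stuck with me, so here is a rough sketch of it as I understood it (table and column names are invented):

-- Copy out only the rows you want to keep; SELECT INTO is minimally logged
-- under the simple and bulk-logged recovery models
SELECT *
INTO dbo.BigTable_keep
FROM dbo.BigTable
WHERE IsActive = 1

-- Throw the rest away; TRUNCATE is a cheap metadata operation
TRUNCATE TABLE dbo.BigTable

-- Move the keepers back; WITH (TABLOCK) allows minimal logging here too
INSERT INTO dbo.BigTable WITH (TABLOCK)
SELECT *
FROM dbo.BigTable_keep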

Last but not least was Leo Miller’s session on SQL security. He went over how to access SQL Server using only the builtin\admin group. I’ve used this before and I don’t think it’s that terrible; I was once new at a job where no one knew the SA password for one SQL instance. He noted that this feature will be gone in Denali. I have mixed feelings about that. Miller did cover some really cool stuff on SQL injection, though. I had never even thought about encoding SQL commands in hex, putting the command into a variable, and then executing that variable. He also mentioned putting sp_password in your injection to cover your tracks, since SQL Server hides the text of statements containing sp_password from its traces. Overall, this was my favorite presentation. I wish we had more time to go into detail and actually get to bust out our laptops and play around.
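
To make the hex trick concrete, here is a tiny illustration (the hex literal is just the varbinary encoding of 'select @@version'):

-- Decode the hex back into a string, then execute the string
DECLARE @cmd VARCHAR(100)
SET @cmd = CONVERT(VARCHAR(100), 0x73656C65637420404076657273696F6E)
EXEC (@cmd) -- runs: select @@version

Nothing about the command is readable at a glance, which is exactly why it turns up in injection attempts.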

I really enjoyed Code Camp 2011. I can’t wait for the next one. As for Auckland, I think the next event we have coming up is SQLSaturday. Woohoo!

So much Internet…

SQL, SQL, SQL.

When it comes to the SQLs, I know only a fraction of what I could. To fill in those gaps, I read. Oh, do I read. I’ve only read a few SQL books, but I have read hundreds of articles and blog posts.

Most of the time it goes like this:

“Hmmmm, that looks like an interesting article. Let me give it a nice ‘ol read”

“Oh…what’s this link? To another blog? Sure! Why not?!”

time to reeeeaadddd

Then I must pull myself away and jump back into the databases.

I like blogs that stick to material I am looking for while also keeping my attention. At the same time, I try to avoid blogs that get too personal or complain too much. I keep my personal blog personal and my technical blog focused on the SQLs. I like to read blogs that try to do the same.

Here is my list of the Top 10 blogs I prefer (in no particular order):

  1. Grumpy Old DBA – the name says it all
  2. DBA Rant – good stories here
  3. SqlBalls
  4. SQL Rockstar
  5. Shaun J Stuart – my recent favorite. Not a ton of updates but great posts
  6. Pinal Dave – Used to be my favorite but there is such a thing as too much content – it’s hard to keep up
  7. Happy SysAdmin – Great for learning some Admin skillz
  8. Paul White – Wowzer. This guy knows a lot.
  9. sqlchicken
  10. SQL Princess – She loves some CTE’s

I like bloggers I can relate to. Many times they have the same problems I have and/or present alternative solutions.

The last thing I would like to say to all of these blogs is: thanks! You’ve helped me a lot.

Do you read blogs? If so, which ones?