FizzBuzz in SQL

Saw this article from Steve Jones about FizzBuzz in SQL. Figured I would try it.

Took me about 5 minutes to do. I did have a few small issues in my code at first though.

Here’s what I came up with:


declare @t table (
num varchar(10))

declare @i int

set @i = 1

while @i <= 100
begin
insert into @t
select @i

set @i = @i+1
end

select
case
when convert(int,num)%3 = 0 and convert(int,num)%5 = 0 then 'FizzBuzz'
when convert(int,num)%3 = 0 then 'Fizz'
when convert(int,num)%5 = 0 then 'Buzz'
else num
end
from @t

The first time I did this, I made num an int because I wanted just numbers in my numbers table. That turned out to be the harder route: the CASE mixes character results ('Fizz', 'Buzz') with the numeric num, so SQL tries to convert the strings to int and throws an error. The easier way was to store the numbers as varchar and do the int conversion inside the CASE statement.
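For what it’s worth, the int version works too if you cast the number in the ELSE branch instead. Here’s a quick sketch of that variation:

declare @t table (
num int)

declare @i int

set @i = 1

while @i <= 100
begin
insert into @t
select @i

set @i = @i+1
end

select
case
when num%3 = 0 and num%5 = 0 then 'FizzBuzz'
when num%3 = 0 then 'Fizz'
when num%5 = 0 then 'Buzz'
else cast(num as varchar(10))
end
from @t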

Is this the best way to do this? No idea. BUT…it works.

Tempdb Destroyer

In one of my previous posts, I covered how to create a really big database with just garbage data in it. Recently, I was given a task to blow up tempdb. I thought about using a similar script and tried it out.

Of course it worked and it blew up tempdb, but it also errored out. That wasn’t acceptable. I needed to find a way to blow up tempdb without it erroring out.

I tried a couple little things but ended up using the following script:

SET nocount ON
 
USE tempdb
 
IF OBJECT_ID('tempdb..#freespace') IS NOT NULL
DROP TABLE #freespace
IF OBJECT_ID('tempdb..#BigGuy1') IS NOT NULL
DROP TABLE #BigGuy1
IF OBJECT_ID('tempdb..#BigGuy2') IS NOT NULL
DROP TABLE #BigGuy2
IF OBJECT_ID('tempdb..#BigGuy3') IS NOT NULL
DROP TABLE #BigGuy3
IF OBJECT_ID('tempdb..#BigGuy4') IS NOT NULL
DROP TABLE #BigGuy4
IF OBJECT_ID('tempdb..#BigGuy5') IS NOT NULL
DROP TABLE #BigGuy5
GO
 
CREATE TABLE #BigGuy1 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy2 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy3 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy4 ( column1 CHAR(7100), column2 CHAR(900))
CREATE TABLE #BigGuy5 ( column1 CHAR(7100), column2 CHAR(900))
 
CREATE TABLE #freespace
( drive VARCHAR(1),
[MB Free] INT)
 
DECLARE @drive VARCHAR(1),
@free INT
 
SELECT @drive = LEFT(physical_name, 1)
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
AND TYPE = 0
 
INSERT INTO #freespace
EXEC master..xp_fixeddrives
 
SELECT @free = [MB Free]
FROM #freespace
WHERE drive = @drive
 
while @free > 1
BEGIN
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
INSERT INTO #BigGuy1 VALUES('test', 'test')
INSERT INTO #BigGuy2 VALUES('test', 'test')
INSERT INTO #BigGuy3 VALUES('test', 'test')
INSERT INTO #BigGuy4 VALUES('test', 'test')
INSERT INTO #BigGuy5 VALUES('test', 'test')
 
DELETE #freespace
 
INSERT INTO #freespace
EXEC master..xp_fixeddrives
 
SELECT @free = [MB Free]
FROM #freespace
WHERE drive = @drive
 
print @free
 
END

This one uses a loop to check how much free space you have left. You can also set the threshold. For example, if you want to have 100MB free, you would just change @free > 1 to @free > 100.

A word of caution: do NOT run this on your production system. We used this for a training session in a TEST environment.

The easy part about this one is that when you are finished expanding your tempdb, you can just restart your SQL services and it will reset the size.
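If you want to watch it happen, here’s a quick check of tempdb’s current file sizes before and after (size is stored in 8 KB pages, so multiply by 8 and divide by 1024 to get MB):

SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files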

Have fun!

 

Changing physical locations of all DBs

I got a request this week from an application engineer. The request was to move ALL physical database files from one physical drive to another physical drive on the same machine.

Hmmmm, I have never moved ALL the .mdf files from one place to another at one time. One of my teammates told the application engineer that it might take some time. He wanted to detach/move/attach all the DBs.

I knew this wouldn’t work because:

1) it would take forever

2) how are you going to detach the master DB?

No Detach for YOU!

So my solution contained 3 parts:

  1. Change the location of all the files inside SQL
  2. shutdown and move the files
  3. Change SQL’s startup parameters for the master database

Easy as π.

I used some good old dynamic SQL to create an ALTER DATABASE statement for each DB (except the master):

--first, let's just take a look and see where our files are living
SELECT name, physical_name,
CASE
WHEN TYPE = 0 THEN 'DATA'
WHEN TYPE = 1 THEN 'LOG'
END AS FileType
FROM sys.master_Files
 
--Now let's create those ALTER DB statements
SELECT 'Alter DATABASE ' + d.name + ' MODIFY FILE ( NAME = ' + f.name + ', FILENAME = ' +
 
--here is where you want to put where you want to move your files
 
' ''c:\NewFolder\'+
right(physical_name, charindex('\',reverse(physical_name))-1 ) + ''' ) '
FROM sys.databases d
inner join sys.master_files f on d.database_id = f.database_id
where 
 --remove this if you want both DB AND log file
 type = 0 
--and keep this part, dbid 1 is master
 and d.database_id <> 1
 
--Check again to see if the physical name is changed
select name, physical_name,
case
when type = 0 then 'DATA'
when type = 1 then 'LOG'
end as FileType
from sys.master_Files
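The output of the middle query is just a column of statements to copy and run. Each one looks something like this (hypothetical database and file names):

Alter DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Data, FILENAME = 'c:\NewFolder\SalesDB_Data.mdf' )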

After you run the generated ALTER statements, you should see this message a bunch of times:

The file “tempdev” has been modified in the system catalog. The new path will be used the next time the database is started.

Now you can shut down your SQL services and move the physical files to their new location.

Once you have all the files moved over (including the master) you’ll need to change the startup parameters.

To do this:

  1. Open up SQL Server Configuration Manager
  2. Right click on your SQL instance -> Properties
  3. Click on Advanced -> Startup Parameters
  4. Change the location of the master .mdf (see the example below)
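The master file locations live in those startup parameters: -d points at the master data file, -l at the master log file, and -e at the SQL Server error log. Assuming the hypothetical C:\NewFolder path from the script above (and whatever your existing -e path happens to be), the updated parameters would look something like this:

-dC:\NewFolder\master.mdf
-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG
-lC:\NewFolder\mastlog.ldf

If you moved mastlog.ldf as well, make sure -l points at its new home too.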
Once you have updated your startup parameters, you should be good to go. I want to stress “should” here because you have to make sure that your service account has access to the new location. If it doesn’t, SQL won’t start.

I tried this a couple of times on my test machine and then in the production environment. I was hoping to “wow” the customer with the speed of completing this task; unfortunately, their disks were really slow and moving the files took the majority of the time.

Splitting IDs with commas, updating them, then slamming them back together!

Unfortunately, my company still uses MS Access DBs. That’s Access 97, folks! I’ve battled many challenges over the last couple months with these databases. It hasn’t been fun, but it’s been a great learning experience.

I had a fun adventure last week dealing with one of these DBs. We have a user access DB that connects to our SQL user DB. If HR updates something in their system, it also gets updated in the Access DB system. Then we have a job that updates the SQL DB from the Access DB. The job isn’t done in SQL, though. The dev team that was lucky enough to get the task of updating that job made a simple change that should have been a cakewalk.

The update was completed easily, but no one took the dependencies into consideration. We didn’t even know we had dependencies on this old job/DB until the emails started rolling in.

Long story short, we found the problem and were able to update it. The bad part was fixing the busted data. We found out that this DB was tied to two other MS-Access DBs. D’OH!

From there I was given the task of fixing the data in the other two DBs ASAP so people could work on their documents. I thought, “No big deal, update some old Access DBs.” Then I saw the data in the DBs.

The field I had to update was tied to the UserID in the User DB. This wouldn’t have been a problem except there were many cases where there was more than one UserID in that column.

Instead of a nice USERID = 123, I had loads of USERID = 123,234.

My game plan was to take all those multiple USERIDs, split them apart, update them, then slam them back together again. How the hell was I supposed to do this in Access?

Instead of making this a novel, I’ll just get to the juicy stuff. The main goal was to update old USERIDs with new USERIDs.

The first thing I did was acquire the latest and greatest ID and toss that dude into a temp table.

SELECT USERID, MAX(USERID) AS MAXUID INTO #max
FROM USERTABLE
WHERE USERID <> USERID
GROUP BY USERID

Next I needed to get all the columns that had commas, split them up, and put the pieces into a temp table. For this I used a snippet I found here that uses a CTE and XML.

WITH Cte AS
( SELECT
        TableID,
        CAST('<M>' + REPLACE(USERID, ',', '</M><M>') + '</M>' AS XML) AS USERID
      FROM AccessDB...CommaTable
)
SELECT
    TableID,
    Split.a.value('.', 'VARCHAR(100)') AS USERID,
    0 AS updated INTO #splitter
FROM Cte
CROSS APPLY USERID.nodes('/M') Split(a)

Next, I updated the temp table (#splitter) with the newest IDs from #max:

UPDATE  #splitter
SET UserID = m.maxuid,
updated = 1
FROM #splitter s INNER JOIN #max m ON s.USERID = m.USERID
WHERE tableid IN (
SELECT tableid
FROM #splitter
GROUP BY tableid
HAVING COUNT(tableid) > 1)

Finally, I had to smash those guys back together as they were before. I used another temp table to put them back together.

SELECT t.tableid,
      --Using the STUFF command
       STUFF(ISNULL((SELECT ', ' + x.USERID
                FROM #splitter x
               WHERE x.tableid = t.tableid
            GROUP BY x.USERID
            --AND using XML
             FOR XML PATH (''), TYPE).value('.','VARCHAR(max)'), ''), 1, 2, '') AS USERID INTO #updated
  FROM #splitter t
  WHERE updated = 1
GROUP BY tableid

And then I  updated the real table with the updated records

UPDATE AccessDB...CommaTable
SET USERID = u.USERID
FROM #updated u
INNER JOIN AccessDB...CommaTable p ON u.tableid = p.tableid

It was a nice adventure and hopefully it’s something I can use again.
Have you been in this situation before? What did you do? What would you do differently?

How to make a Big Ugly Nasty Test Table

Do you need to fill up a database with terrible data to test restore times? Do you want to see how fast you can fill up your hard drive?

 

Well, try out this little bit of code on your DB and watch the MBs get sucked up!

CREATE TABLE BigGuy (column1 CHAR(8000))
CREATE TABLE BigGuy2 (column1 CHAR(8000))
CREATE TABLE BigGuy3 (column1 CHAR(8000))
CREATE TABLE BigGuy4 (column1 CHAR(8000))
CREATE TABLE BigGuy5 (column1 CHAR(8000))

DECLARE @counta INT = 1

WHILE @counta <= 1000000000
BEGIN
    INSERT INTO BigGuy VALUES('test')
    INSERT INTO BigGuy2 VALUES('test')
    INSERT INTO BigGuy3 VALUES('test')
    INSERT INTO BigGuy4 VALUES('test')
    INSERT INTO BigGuy5 VALUES('test')
    SET @counta = @counta + 1
END

This should suck up about a GB a minute. Want to suck up more space? Add more tables and make sure your database is set to FULL recovery model.
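If you do go the FULL recovery route, something like this flips the model and then checks how much space has been chewed up (YourTestDb is just a placeholder name):

USE YourTestDb
GO
ALTER DATABASE YourTestDb SET RECOVERY FULL
GO
-- how big has the database gotten so far?
EXEC sp_spaceused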

WARNING! – This is NOT A GOOD IDEA and should ONLY be used in controlled environments FOR TESTING PURPOSES

Find…underlings…

Find…..underlings…..
searching……
underlings found……

I was recently given the task to create a stored procedure to find people who are under other people from our user’s database.

The requester wanted to put in the user’s ID and have it return everyone under them…and everyone under those people…and so on.

The table structure looked something like this:

CREATE TABLE [dbo].[tblUser](
    [UserID] [INT] NOT NULL,
    [EmployeeNum] [CHAR](7) NULL,
    [FirstName] [VARCHAR](20) NULL,
    [LastName] [VARCHAR](20) NULL,
    [StatusID] [INT] NULL,
    [ManagerEmployeeNum] [VARCHAR](20) NULL,
 CONSTRAINT [PK_tblUser] PRIMARY KEY CLUSTERED ( [UserID] ASC )
 WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

To complete this task I created a temp table, some counters, and a little while loop.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:		Adam Mikolaj
-- Create date: 23 June 2011
-- Description:	SP to find what people are under what people (managers)
-- =============================================
CREATE PROCEDURE uSP_FindUnderlings
@UserID INT
 
AS
BEGIN
SET NOCOUNT ON;
 
CREATE TABLE #peopleunder(
employeenum VARCHAR(20))
 
DECLARE
@employeeNum VARCHAR(20),
@countA INT = 1, @countB INT = 0
 
SET @employeeNum = (SELECT rtrim(EmployeeNum) FROM tblUser WHERE UserID = @UserID)
 
INSERT INTO #peopleunder
SELECT @employeeNum
 
while @countA <> @countB
	BEGIN
		SET @countA = (SELECT COUNT(*) FROM #peopleunder)
 
                INSERT INTO #peopleunder
		SELECT rtrim(EmployeeNum) AS employeenum FROM tblUser WHERE StatusID = 1
AND (ManagerEmployeeNum IN (SELECT EmployeeNum FROM #peopleunder)
AND EmployeeNum NOT IN (SELECT EmployeeNum FROM #peopleunder)) 
 
		SET @countB = (SELECT COUNT(*) FROM #peopleunder)
 
	END
SELECT * FROM tblUser WHERE EmployeeNum IN (SELECT EmployeeNum FROM #peopleunder)
AND UserID <> @UserID AND StatusID = 1
DROP TABLE #peopleunder
END
GO

 

The basic logic behind this is using the loop to keep inserting records into the temp table until it doesn’t find any more new records. I use the counters to compare the row counts before and after each insert; if they match, the loop is finished.
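For what it’s worth, a recursive CTE is another common way to walk this kind of hierarchy. Here’s a rough sketch against the same tblUser, assuming the same @UserID parameter as the proc above:

;WITH Underlings AS
(
-- anchor: the manager we start from
SELECT EmployeeNum
FROM tblUser
WHERE UserID = @UserID
UNION ALL
-- recursive part: anyone whose manager is already in the result set
SELECT u.EmployeeNum
FROM tblUser u
INNER JOIN Underlings m ON u.ManagerEmployeeNum = m.EmployeeNum
WHERE u.StatusID = 1
)
SELECT * FROM tblUser
WHERE EmployeeNum IN (SELECT EmployeeNum FROM Underlings)
AND UserID <> @UserID AND StatusID = 1

Same result, no counters. Just watch out for cycles in the data; the default MAXRECURSION limit of 100 will stop a runaway loop with an error.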

Just a simple little SP to start my morning.

Recovery Model for Every Database

One thing that is enjoyable about my position is working with people who have no idea what SQL is besides “a difficult version of Excel”. This gives me the chance to use what I have learned over the last couple of years and explain it to the people who are using it in easy, digestible… sandwiches, if you will.
One thing I see a lot of is the FULL recovery model with no log backups. Sometimes these databases get tossed into production environments by third party tools that have full admin rights. Since I haven’t been here that long, I am still working on locking down all the production servers. So I needed a quick way to find which DBs were in SIMPLE and which DBs were in FULL.

So I ran into a very, very cool function recently: DATABASEPROPERTYEX. If you ask this dude the status of a database, he’ll give it to you. Want to know the version? Yup, he can do that too. For the full list, check out BOL.
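For example, a couple of quick one-off calls (Status and Version are both valid property names):

SELECT DATABASEPROPERTYEX('master', 'Status') AS db_status,
DATABASEPROPERTYEX('master', 'Version') AS db_version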
I used this guy to tell me the recovery model of every DB on that server.

Check it out:

SELECT [name] AS dbname,Databasepropertyex([name], 'Recovery') AS recoverymodel
FROM MASTER.dbo.sysdatabases

Team that up with registered servers and you can find out the recovery model of every DB on all your servers!

Where the hell is that column???

“Hey Database dude, I need you to update the PDADE33e433 Table with values for columns GrAnDaMa12333 and WTF32323 with values ‘ASdjendnd’ and ‘BarfFartwekljweljk3’. Cool? Thanks.”

I work with engineers who use databases. They design all the products that my company sells. They are really good at designing but…they use databases. For them, I am a necessary evil. We all get along very well but they would rather be designing products than deal with storing information. Did I mention it’s one of me to about 300 of them?

With those odds, it’s very hard to keep track of what they are doing in their databases. Supporting over 100 databases (possibly more; I’m only on month 3), I’m still finding it hard to figure out how they are building these databases (that’s a whole different blog entry).

What I do know is that I get requests to change data that they don’t have access to. When this happens I have to figure out where this data is. Most of the time they tell me a server and a table. I always say, “Sure, no problem, just make sure you send me an email with the details.”

This is where my journey begins. To save time I have built a stored proc that I have been installing in all the master DBs of all the instances. This SP checks all columns in all databases to see if one is LIKE the keyword that I pass in. Feel free to use it. NOTE: it also gives you view names.

SET ansi_nulls ON
GO
SET quoted_identifier ON
GO
/*
-- =============================================
-- Author:    Adam Mikolaj
-- Create date: 19 May 2011
-- Description:  Use this SP to search for a column in every database on your server.
-- I created this because sometimes I have no idea which database a column I need is in. 
-- This quickly searches all your DBs
-- Sample - exec uSP_WheresThatColumn 'Birthdate'
-- =============================================
*/
 
CREATE PROCEDURE Usp_wheresthatcolumn
 
@COLUMN VARCHAR(255)
AS
BEGIN
SET nocount ON;
 
CREATE TABLE #whereisit(
database_name VARCHAR(255),
TABLE_NAME VARCHAR(255),
column_name VARCHAR(255))
 
DECLARE @SQL VARCHAR(MAX)
SET @SQL =
 'insert into #whereisit
 exec sp_MSforeachdb  ''
use [?] select ''''?'''' as DATABASE_NAME , TABLE_NAME, COLUMN_NAME 
from INFORMATION_SCHEMA.COLUMNS
 where COLUMN_NAME like ''''%'+@COLUMN+'%'''' and TABLE_NAME not like ''''#%''''
 ''
  '
 
EXEC (@SQL)
SELECT * FROM #whereisit
DROP TABLE #whereisit
END
GO

To call it, just run: EXEC master..uSP_WheresThatColumn 'column name'

Enjoy!

Fixing Orphaned Users

One of my favorite commands when I started my career was sp_change_users_login. I had to move a lot of databases around, and I orphaned a lot of users. This command really helped me find which users were orphaned and reconnect them to the appropriate logins.

When should you use this command?

This command is very useful when you have a user in your database and a login on your server that should be connected (or mapped) but are not.

To find out if you have any orphaned users, run:

EXEC sp_change_users_login 'report'

This will give you a list of orphaned users in your DB.

If you find you have some orphaned users you can run this command to fix them:

EXEC sp_change_users_login 'auto_fix', 'YourUser'

It’s a handy tool to use, but BOL warns against it because it’s deprecated and will be going away soon. They suggest using ALTER USER instead.

ALTER USER YourUser WITH LOGIN = YourUser
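If you want to spot orphans without the deprecated proc, a rough sketch is to compare database users to server logins by SID (SQL logins only; run it inside the database you want to check):

SELECT dp.name AS orphaned_user
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
AND dp.type = 'S'            -- SQL users
AND dp.principal_id > 4      -- skip dbo, guest, INFORMATION_SCHEMA, sys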

Query to find Duplicate records in a column

We are going to start SQL Sandwiches with a tasty little snack. I have been asked this in an interview before: “What is an easy way to find duplicate records in one table?”

The answer is quite easy yet it can stump anyone who hasn’t done it before.

It’s a simple select statement that you can use for multiple columns.

SELECT column1,
COUNT(column1)
FROM table1
GROUP BY column1
HAVING COUNT(column1) > 1

You can add other columns in there if you are looking for a record that has multiple columns that are the same.
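For example, to flag rows where the combination of two columns repeats (the column names here are just placeholders):

SELECT column1,
column2,
COUNT(*) AS dupes
FROM table1
GROUP BY column1,
column2
HAVING COUNT(*) > 1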