SQL Sentry Plan Explorer and Deadlock XML Report

OK, I just found one of those things that I can remember reading about but forgot until I saw it again. Back in October I had the privilege of being the recipient of a SQL Sentry Plan Explorer Pro license via one of their contests.

If you are not familiar with SQL Sentry, you should be. One of the products they are well known for in the SQL Server community is Plan Explorer. They also offer one of the coolest monitoring products around, Performance Advisor, which can monitor just about anything you want in your SQL Server environment. One of its features is graphical deadlock analysis, and it can export those deadlock graphs similar to the deadlock files you can obtain from SQL Server Profiler. Well, one of the many features of the Pro version of Plan Explorer is the ability to view those deadlock graphs graphically. As some may know, deadlocks are one of the more common things you troubleshoot in most environments. Being able to easily review those graphs can save a lot of time and headache if you are a graphical person, and I like pretty pictures.

Well, I don't have access to Performance Advisor for every client I work with (although it would be on the wish list), but I can pull the XML deadlock report from the system_health session in SQL Server 2008 and above. I wondered if I could save that XML data out to a file and open it up in Plan Explorer Pro. So I forced a deadlock on my local instance and pulled it out of the system_health event file. I saved that XML into a file with an .xml extension, opened it up in Plan Explorer, and…voilà:

[Screenshot: deadlock graph opened graphically in Plan Explorer]

How freaking cool!!!
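
If you want to try the same thing, something along these lines will pull the deadlock reports out and write each one to its own .xml file. Consider it a sketch: it assumes a SQL Server 2012 or higher instance where system_health writes to an event file (on 2008/2008 R2 you would read the ring_buffer target instead), and the server name and output folder are placeholders.

Import-Module SQLPS -DisableNameChecking

# pull every xml_deadlock_report captured by the system_health event file target
$query = @"
SELECT CAST(CAST(event_data AS xml).query('(event/data/value/deadlock)[1]') AS nvarchar(max)) AS DeadlockGraph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';
"@

$i = 0
Invoke-SqlCmd -ServerInstance "MyServer" -Query $query -MaxCharLength 10000000 | ForEach-Object {
	$i++
	# save each deadlock graph to its own file, then open it in Plan Explorer Pro
	$_.DeadlockGraph | Out-File "C:\Temp\deadlock_$i.xml" -Encoding utf8
}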

Tidbit – SQL Server Maintenance Plan Logging

Whether you are an advocate of maintenance plans or not, they have their place in some situations. However, when you do use them, there is one thing you should understand that is turned on by default with every maintenance plan you create: reporting and logging. I am not sure why, but Microsoft decided that any sub-plan that exists within a maintenance plan should have a log file created for it, named by date, every time it runs.

I have lost count of how many times I have come across a server, reviewed its default log directory, and found thousands of small text files of maintenance plan logs going back years. How large the files are depends on what actions are performed within the sub-plan of the maintenance plan.

You will generally find the log directory filled with files similar to this:

[Screenshot: log directory full of maintenance plan report files]
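
If you inherit a server where this has been left on for years, a quick bit of PowerShell will clear out the old reports. This is only a sketch: the path is the default log directory of a SQL Server 2012 instance, the file pattern is an assumption about how the reports are named, and -WhatIf is there so you can see what would be deleted before anything actually is.

# remove maintenance plan report files older than 30 days (drop -WhatIf once you trust it)
Get-ChildItem "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log" -Filter "*_Subplan_*.txt" |
	Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
	Remove-Item -WhatIf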

Now, to see how this is configured, open up a maintenance plan and you will find a button at the top of that window pane that opens the reporting setup:

[Screenshot: Reporting and Logging button on the maintenance plan design surface]

[Screenshot: Reporting and Logging options dialog]

I will on every occasion simply uncheck the box for “Generate a text file report” because I do not need hundreds of files created that simply show “successful” attempts:

[Screenshot: example maintenance plan report file]

If my maintenance plan is failing, I will just use the output file that can be configured on the SQL Agent job step, under the Advanced page. There, by default, it will do the one thing you cannot have the maintenance plan reporting and logging do: overwrite my log file each time. I only need one log file to check in the event something fails.

[Screenshot: SQL Agent job step output file configuration on the Advanced page]
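
If you would rather set that output file with T-SQL than click through the job step dialog, something like this does it (the job name, step, and path are made up; point it at whatever job the maintenance plan created on your server):

Import-Module SQLPS -DisableNameChecking

$qry = @"
EXEC msdb.dbo.sp_update_jobstep
	@job_name = N'MyMaintenancePlan.Subplan_1',
	@step_id = 1,
	@output_file_name = N'D:\SQLAgentLogs\MyMaintenancePlan_Subplan_1.txt';
"@
Invoke-SqlCmd -ServerInstance "MyServer" -Query $qry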

FileZilla Logs to SQL Server via PowerShell

I had a client that uses FileZilla for FTP, where little files are constantly being sent to the server from numerous devices. In this situation I needed to log those connections so I could try to track down an issue. [I’m a man with many hats.] So I went into the FileZilla Server interface and enabled logs to be created by day. I thought I might be able to just parse them easily with PowerShell, but no, not really.

Through a bit of searching I found a few articles that talked about how ugly the FileZilla logs actually are, and found one that utilized LogParser to change them into W3C-formatted files. I decided to go a step further and put them straight into a SQL Server table.

My intention was to have a SQL Agent job, with a PowerShell step, that would look at the log file directory and import the log for that day into a table created for that day. I am not fond of putting dates into a table name, but in this situation I did not particularly want to put much effort into setting this up. I spent an hour or so troubleshooting what turned out to be that "log_2014-11-14" needed to actually be "[log_2014-11-14]" (with the brackets) in the query for LogParser, mostly because of the unhelpful error message being returned in PowerShell.

The complete code I use in the job step is below. I have had this running for a few days now and it works well enough. I then went through and created a few procedures to pull out the information I wanted, using a parameter to pass in the date of the table I wanted to query; a sketch of one of those procedures follows the job step code.

$erroractionpreference = "Stop";
[string]$dbserver = "MyServer"
[string]$dbname = "MyDatabase"
<#
References used to build this script:

http://strivinglife.com/words/Post/Parse-FileZilla-Server-logs-with-Log-Parser


http://www.technologytoolbox.com/blog/jjameson/archive/2012/02/29/import-website-iis-logs-into-sql-server-using-log-parser.aspx


https://www.simple-talk.com/sql/sql-tools/microsofts-log-parser-utility-swell-etl/

#>
[string]$logParser = "${env:ProgramFiles(x86)}" + "\Log Parser 2.2\LogParser.exe"

$dateFile = [DateTime]::Now.AddDays(-1) | Get-Date -Format yyyy-MM-dd;
$infile = "${env:ProgramFiles(x86)}" + "\FileZilla Server\Logs\fzs-" + $dateFile + ".log"
$insertTable = "log_" + $dateFile

$createTable = @"
CREATE TABLE [$insertTable]
(
	RequestNumber varchar(10) NOT NULL,
	DateTime datetime NOT NULL,
	[User] varchar(20) NOT NULL,
	IpAddress varchar(15) NOT NULL,
	Request nvarchar(500) NOT NULL
);
GO
CREATE CLUSTERED INDEX [ci_${insertTable}_requestnumber] ON [dbo].[$insertTable]([RequestNumber]);
GO
"@

try { Invoke-SqlCmd -ServerInstance $dbserver -Database $dbname -query $createTable; }
catch
{
	$errText = $error[0].ToString()
	$errText
};

$qry = @"
SELECT
  SUBSTR(Text, 1, SUB(INDEX_OF(Text, ')'), 1)) AS RequestNumber
  , TO_TIMESTAMP(
   TRIM(
    SUBSTR(
     Text
     , ADD(INDEX_OF(Text, ')'), 1)
     , SUB(INDEX_OF(Text, '-'), ADD(INDEX_OF(Text, ')'), 4))
    )
   )
   , 'M/d/yyyy?H:mm:ss'
  ) AS DateTime
  --, TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
  , TRIM(SUBSTR(
   TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
   , 0
   , LAST_INDEX_OF(
    TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
    , '('
   )
  )) AS User
  , SUBSTR(
   TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
   , ADD(LAST_INDEX_OF(
    TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
    , '('
   ), 1)
   , SUB(LAST_INDEX_OF(
    TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
    , ')'
   ), ADD(LAST_INDEX_OF(
    TRIM(SUBSTR(Text, ADD(INDEX_OF(Text, '-'), 1), SUB(INDEX_OF(Text, '>'), ADD(INDEX_OF(Text, '-'), 1))))
    , '('
   ), 1))
  ) AS IpAddress
  , SUBSTR(Text, ADD(INDEX_OF(Text, '>'), 2), SUB(STRLEN(Text), INDEX_OF(Text, '>'))) AS Request
INTO [$insertTable]
FROM '$inFile'
 WHERE Text LIKE '(%'
  AND Request NOT LIKE '221 %'
  AND Request NOT LIKE 'QUIT%'
"@

[string]$cnString = "Driver={SQL Server Native Client 11.0};Server=$dbserver;Database=$dbname;Trusted_Connection=yes;"

[string[]]$parameters = @()
$parameters += $qry
$parameters += "-i:TEXTLINE"
$parameters += "-o:SQL"
$parameters += "-oConnString:$cnString"
$parameters += "-stats:OFF"

& $logParser $parameters
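
As for those procedures, here is a minimal sketch of one (the procedure name and column list are hypothetical; the point is that the dated table name has to be stitched together with dynamic SQL). Something like this gets run once against the database, not in the job step:

$procDef = @"
CREATE PROCEDURE dbo.GetFtpRequestsByDate
	@LogDate char(10)  -- e.g. '2014-11-14'
AS
BEGIN
	SET NOCOUNT ON;

	DECLARE @sql nvarchar(max) =
		N'SELECT [DateTime], [User], IpAddress, Request
		FROM dbo.' + QUOTENAME('log_' + @LogDate) + N'
		ORDER BY [DateTime];';

	EXEC sp_executesql @sql;
END
"@
Invoke-SqlCmd -ServerInstance "MyServer" -Database "MyDatabase" -Query $procDef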

SQL Server GROUP BY in PowerShell

I usually don’t have any specific reason to play with PowerShell other than the sheer joy of doing it, but this time I did: I wanted to go through a client’s server and find out how many tables each database had.

Now something like this can easily be done in T-SQL for a single database with this bit of code:

SELECT COUNT(*) AS TotalCount
FROM sys.all_objects
WHERE is_ms_shipped = 0 AND type='U'

The above works, but then you have to write additional code to execute it against every database on a given server. I prefer to use PowerShell one-liners to get things like this done, at least when I can.

In this instance I figured out how, and it lets me introduce you to the Group-Object cmdlet. This cmdlet can be passed a property and will provide a count of the objects it finds for each value of that property. So what is done above can easily be accomplished using these few lines (OK, so it isn’t quite a one-liner):

Import-Module SQLPS -DisableNameChecking
$s = New-Object Microsoft.SqlServer.Management.Smo.Server ORKO
($s.Databases).Tables | Group-Object Parent | select Count, Name

Against my local instance of SQL Server 2012, the above outputs the following:

[Screenshot: Group-Object output showing table counts per database]
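
And if you want the databases with the most tables at the top, just add a Sort-Object to the pipeline (this assumes the same $s server object from above):

($s.Databases).Tables | Group-Object Parent | Sort-Object Count -Descending | Select-Object Count, Name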

Max User Connections set to one (oops)

I frequent the Database Administrators forum over on the Stack Exchange network, and came across a question that intrigued me enough to play around with the setting noted in its first sentence: the maximum number of concurrent connections. What happens when you set this to one, and how do you set it back to the default?

As a small note, the method the OP tried to use just to get a database restored was completely the wrong way to go about it. You should not be making changes to server-level configurations without knowing the full effect they can have, especially on a production instance.

Now, if you read the documentation on this setting, the main sentence that should grab you is:

The user connections option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server.

The default value of zero (0) means unlimited, so it goes without saying that if you set this to anything above zero you are limiting how many connections you can have. I would say that setting it to one is roughly equivalent to putting the instance in single-user mode.
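
For reference, this is roughly all it takes to get into that state; user connections is an advanced option, hence the first two statements. Treat it as a sketch for a throwaway test instance, not something to run anywhere you care about:

Import-Module SQLPS -DisableNameChecking

$qry = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'user connections', 1;  -- limit the instance to a single user connection
RECONFIGURE;
"@
Invoke-SqlCmd -ServerInstance ORKO -Query $qry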

I set this to one on my local SQL Server 2012 instance and restarted the instance, because any current connections are not going to be closed automatically by changing this value. Once I tried to connect back to my instance I received this error:

[Screenshot: user connections error message]

Now what do you do to get access again? The most upvoted answer to the question suggests trying the DAC (dedicated administrator connection). Alright, let us see what happens:

[Screenshot: SSMS DAC connection attempt]

I will note the above is from clicking “New Query” in SSMS, because the initial connection prompt you get when opening SSMS does not support a DAC connection. Doing this let me connect, but it is not guaranteed to work in every situation. So what is another method, you ask? PowerShell, I answer.

You can make modifications to SQL Server configurations using SMO and PowerShell. The example provided there is actually what pointed me to these lines of code to fix my predicament:

Import-Module SQLPS
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server ORKO
$srv.Configuration.UserConnections.ConfigValue = 0  # 0 = unlimited, the default
$srv.Configuration.Alter()

I executed the above, restarted the instance, and was back in business. Now, I will stipulate that I executed the above code immediately after the instance restarted, so I was sure to grab the only connection allowed. If you have other applications or systems hitting your instance, you may need to just try the DAC. If the DAC is disabled in your environment (some security standards want to see it disabled), then you will need to do a bit more work to grab that single connection.
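
One quick way to check ahead of time whether the remote DAC is even an option is to look at the remote admin connections setting; a run_value of 1 means remote DAC connections are allowed:

Invoke-SqlCmd -ServerInstance ORKO -Query "EXEC sp_configure 'remote admin connections';"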

I am Biml (Bacon Integrated Markup Language)

Wouldn’t it be awesome if that were what it really stood for? Maybe not. So what does Biml actually stand for? Business Intelligence Markup Language:

[Image: Biml logo]

Small Background

Varigence is the company that created Biml, particularly Scott Currie, back in 2008. You can check their page to get the full story; it is worth a read. It is a pretty remarkable story that they took a thought and turned it into a full technology that includes both proprietary and open source products. I can remember hearing about it within the past year or so, but it never clicked with me at the time (it does now). I think around 2012 it started to get an uptick in exposure. It has been showing up more and more in sessions at the PASS Summit and at SQL Saturdays over the past year.

Now, BimlScript is the use of C# (by default) code within your Biml, which offers an enormous amount of flexibility in what you can do. This is where you can get into dynamically creating things within SSIS projects or even SSAS projects.

The Potential

I think back to a contract job I had with the Army handling ETL design. For FOIA requests I was responsible for making packages that would pull data out of the source and dump it into an Access database or Excel spreadsheet. The design pattern I had to use there was creating a package per table of data that needed to be pulled, and then a master package that would simply iterate through all of those “table” packages.

We used an Agile development process, so one request would generally take me a few days to build out the project, deploy it into test, then staging, and then finally production. Depending on what amount of data was being requested, it could take up to a week to get everything designed and finalized and send the file to the requestor.

If I could have used Biml for this process I would have cut down the initial design time considerably. It would have given me the ability, using C# (which I would have had to learn as well back then), to iterate over the table list of a given source and generate all the Data Flow Tasks; think of the clicks you go through just setting up one Data Flow Task and how much time this would have saved.

Biml Products

The most common product referenced in the community alongside Biml, at least that I have seen, is BidsHelper. BimlScript support is just one part of what is included in BidsHelper; there are a ton of little things added in there that the creators of BIDS left out. If you are using SSDT-BI now, a few of those things may be obsolete. You can look at the list of what is included under its documentation. I actually listed some of the ones I find worth noting, even with SSDT-BI, on my Tools of the Trade page.

BidsHelper is used with Visual Studio (BIDS or SSDT-BI) and is an excellent starting point for getting started with Biml. If you find you need to go to the next step, Varigence has an IDE they created called Mist that provides a very robust environment for developing SSIS and SSAS. I link to a few videos below that show off what this product can do; very cool stuff! The other product they have is Vivid, an add-on for Excel users that handles data analysis.

Resources to Note

When I wanted to learn more about Biml I found myself asking, “Where do I start?”, similar to what a lot of folks ask when they want to get into being a DBA or database developer. Below are some links to articles and videos that can help you get started.

Articles:

SQLServerCentral’s Stairway to Biml (yes, you should register…just do it)

Bill Fellows’ blog post on Biml (some good nuggets if you learn by seeing)

BimlScript Walkthroughs, Getting Started

Biml Language Reference (similar to MSDN for SQL Server)

Biml Snippets (ton of examples to learn from here)

Videos that I found the most helpful:

SQLug.se – Peter Hansen’s – Biml, session 1

SQLug.se – Scott Currie’s – Biml, session 2

Using BIML as an SSIS Design Patterns Engine (PASS DW/BI Virtual Chapter) (Andy Leonard does a great walkthrough of Biml)

Biml Introduction and Fundamentals Webinar (Peter Avenant with Varigence, great video showing Mist and Biml, you should subscribe to their channel)

Working with BimlScript to ease and automate your SSIS development (Session done by Jeff Mlakar, just this month. Some excellent tips and examples)

PowerShell: Backup Files to Tape

I had a maintenance plan on a client’s server that was having issues running successfully each night because it kept filling the drive. As it turns out, the cleanup task was set to delete backups older than one day. You would think that was sufficient; however, in this instance it was not working. The timestamps on the previous day’s backups were within a minute of the next run of the maintenance plan, so I took it to mean the task was not actually seeing them as over a day old and therefore skipped them. I changed the setting to 21 hours instead of one day and that seemed to work for this client.

Either way, I needed to delete the old backups that I knew were already backed up to tape. To do this I wrote up this function that simply returns a bit of information about each file: name, last access time, last write time, and attribute values. The attribute will show “Normal” if the file has been backed up (or the archive bit cleared) and “Archive” if it still has to be backed up.

Now, I would caution you to do your own verification as to whether the file actually exists on your backup media (tape, disk, etc.). There are multiple things that can modify the archive bit, so use this at your own discretion.

function Verify-FileBackedUp ($rootdir,[switch]$recurse)
{
  # if value "archive" is shown means file has not been backed up
  # if value "normal" is shown means file has been backed up, or the archive bit cleared 
 if ($recurse)
 {
  dir $rootdir -recurse | where {-not $_.PSIsContainer} | 
  select FullName, LastAccessTime, LastWriteTime, 
   @{Label="FileBackedUp";Expression={(Get-ItemProperty $_.FullName).Attributes}}
 }
 else
 {
  dir $rootdir | where {-not $_.PSIsContainer} | 
  select FullName, LastAccessTime, LastWriteTime, 
   @{Label="FileBackedUp";Expression={(Get-ItemProperty $_.FullName).Attributes}}
 }
}
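
As a usage example, something like this would list the .bak files under a (hypothetical) D:\Backups folder whose archive bit has been cleared and remove them; the -WhatIf stays until you trust what it is about to delete:

Verify-FileBackedUp -rootdir "D:\Backups" -recurse |
	Where-Object { $_.FullName -like "*.bak" -and $_.FileBackedUp -notmatch "Archive" } |
	ForEach-Object { Remove-Item $_.FullName -WhatIf }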

A sample of the function’s output would look something like this:

[Screenshot: sample output of Verify-FileBackedUp]
