Monitor your SQL Server on the cheap with SQL Server Reporting Services.
There are a lot of very nice SQL Server performance monitoring systems on the market, and they come in all shapes and sizes. Usually the level of functionality is directly proportional to the price. Most of these systems will let you track the performance data you need to monitor your systems effectively, but they can cost a lot of money.
Very often, DBAs are limited in the tools they can implement. They may not have the hardware to support Commercial Off the Shelf (COTS) performance monitoring. Once installed, these systems inevitably begin to eat up resources; the processor time they consume can be significant. These tools can also be inflexible, surfacing a lot of information that you really don't need for normal day-to-day operations. Another, frequently more important, consideration is the set of issues related to installing software on your servers. Hardware or procedural constraints can make even affordable software unappealing, and many environments require at least minimal configuration and change management for COTS software. In many cases the effort required to implement the software surpasses its benefits. Finally, the cost of these systems can be prohibitive for many small and medium-sized businesses. Many decision makers don't understand the metrics, and consequently they are loath to spend money to monitor them.
Thankfully, there is an inexpensive alternative to COTS performance monitoring. Microsoft operating systems capture performance data and make this information readily available to users or developers in the Microsoft Management Console – Performance Application. The perfmon.exe application allows access to thousands of different metrics for you to monitor. Moreover, these metrics can be piped to data tables in SQL Server.
This article will explain the steps necessary to build your very own SQL Server Hardware Performance monitoring system with tools that would normally be accessible to most DBAs and Developers. Setup costs are minimal (a few hours of effort) and the resulting product is both flexible and highly customizable. I will also cover some techniques on analyzing the data. A good analysis of the monitoring data can help with troubleshooting and be used for planning future system hardware upgrades.
Before we get started, we need to go over the system requirements for creating the Hardware Performance Monitoring System:
Operating System: Windows XP Professional SP2 or Windows 2003 Server (for creating the Counter Log in perfmon.exe)
Database: SQL Server 2000 (Developer, Standard or Enterprise Edition) SP3a (Windows 2000 or Windows 2000/2003 Server)
Reports: SQL Server Reporting Services SP1 (Windows 2000 SP4, Windows XP SP1, or Windows 2000/2003 Server)
Development: Microsoft Visual Studio® .NET 2003 (any edition) or the standard edition of Visual Basic® .NET (for Reporting Services report development)
After you verify that you have met the system requirements listed above, we can begin building the Performance Monitoring System.
Extracting and reporting performance information is a straightforward process, as depicted in Figure 1 below.
The steps for creating the system are as follows:
1. Access perfmon.exe data by creating a System Data Source Name (DSN)
2. Identify the metrics that you want to capture in perfmon.exe and create a new Counter log that will pipe data to the SQL Server
3. Create the database to receive data from the perfmon application
4. Build reports in SQL Server Reporting Services report to graphically represent the data
5. Analyze the data.
Create the System Data Source Name
In order for the Counter Log data to be sent to the database, you will need to create an ODBC connection. ODBC connections are made by creating a Data Source Name (DSN). Before you create the System DSN to connect to the database, you will need a database to store your Counter Log data. You can place this data into any database you desire, but I recommend creating a new database to provide one central location for all of your performance data. For the sake of this article, I have issued the following script in Query Analyzer to create the performance monitoring database:
CREATE DATABASE dbSQLPerfMon
Now that you have a repository for your data, you are ready to create your DSN. To create your DSN:
1. Go to the Start Menu and open the Control Panel
2. Open Administrative Tools
3. Open Data Sources (ODBC).
4. Go to the System DSN tab and press the Add… button.
5. Select the SQL Server database driver and click Finish. Name the DSN: dsnSQLPerfMon.
6. Select the SQL Server you will be connecting to and then hit Next.
7. Ensure that the With Windows NT… radio button is selected and that the Connect to SQL Server to obtain … check box is checked. Click Next.
8. Change the default database to dbSQLPerfMon or the database where you would like to store the data.
9. Select Next and then Finish and finally OK.
You now have a DSN that will provide connectivity from your Counter Log to your SQL Server database. Now you need to create a Counter Log to populate your database.
Create the Counter Log in perfmon.exe
Perfmon.exe is a performance monitoring tool that comes standard with the Windows operating system. It can be configured to provide detailed monitoring of operating system resource utilization. The portion of the perfmon.exe utility we will focus on is Counter Logs. Counter Logs are named instances of performance counters. For example, assume that you want to collect performance data about your processors. All you need to do is select all of the processor performance counters that you want to review and save the Counter Log as MyProcessorMonitor. This named instance (MyProcessorMonitor) of performance counters will supply performance data only about your processors.
To create a new Counter log:
1. Go to the Start Button and select Run.
2. In the Run window type perfmon and click OK. This will open a Microsoft Management Console that will allow you to select the counters you will need for monitoring your system resources.
3. Select the Performance Logs and Alerts item on the menu tree, right click on Counter Logs and click on New Log Settings.
4. In the New Log Settings window, type SQLPerfCounter1 (as shown in figure 2 below) and click OK.
5. Click on the Add Counters... button and select the following counters from the table below. Note that there are hundreds of performance counters to select from. I have included only these four counters to give you a good start in creating your very own personalized performance monitoring system. You can add as many counters as you like. These counters will monitor your CPU, Memory and Physical Disk subsystems.
Processor: % Processor Time
% Processor Time is the percentage of elapsed time that the processor spends executing a non-Idle thread. Ensure _Total is selected so overall CPU utilization is captured.
Memory: Pages/sec
Pages/sec is the rate at which pages are read from or written to disk to resolve hard page faults.
PhysicalDisk: % Disk Time*
% Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests.
* Disk performance counters are permanently enabled on systems newer than Windows 2000. If you are trying to run this on Windows 2000 or an earlier version of Windows, you must issue diskperf -y at the command prompt and restart your system. If you do not initialize the disk counters with this command, your disk counter values will be 0.
MSSQL$YourServer: Buffer Manager: Buffer cache hit ratio
Buffer cache hit ratio is the percentage of pages that were found in the buffer pool without having to incur a read from disk.
6. After you have selected the four counters from the table above, change the Interval to 15 minutes. I have found that this time interval is small enough to provide useful performance data for analysis. Any smaller and you may get more data than you need; any larger and you may not get a good picture of how your system is performing.
7. Depending on your security configuration, you may need to put your user name and password in the Run As: field.
8. Go to the Log Files tab. This window allows you to determine what type of output the Counter Log will provide.
9. In the Log file type: drop-down menu, select SQL Database and then select Configure… This brings up the Configure SQL Logs window for selecting the DSN that will connect you to your specific database.
10. Select the DSN called dsnSQLPerfMon.
11. Click OK twice to complete your Counter Log and voilà, you have a Counter Log that is ready to connect and log data to your performance monitoring database.
12. Finally, to initialize your new Counter Log, right click on it and click Start.
Your SQLPerfCounter1 icon should change from red to green as shown in figure 3. Green means that your Counter Log is connected to the SQL Server database and that it is sending performance counter data at intervals of every 15 minutes.
On the first start of the Counter Log, the perfmon.exe utility will create three new tables in your performance monitoring database. These tables will be the central repository for all of your monitoring data. The tables and their descriptions are as follows:
The CounterData table contains a row for each counter value that is collected at a particular time. There can be a large number of these rows.
The CounterDetails table describes a specific counter on a particular computer or node.
The DisplayToID table maps the user-friendly string displayed by System Monitor to the GUID stored in the other tables.
The Entity Relationship (ER) diagram for the Counter Log (Figure 4) shows that CounterID forms a Primary Key/Foreign Key relationship between the CounterDetails and CounterData tables. The CounterID columns will be used to join these two tables.
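Once data is flowing, a quick join of the two tables in Query Analyzer confirms that the Counter Log is writing rows (a sketch; the column names come from the perfmon-created schema):

```sql
-- Verify that the Counter Log is populating the database:
-- join the detail and data tables on their shared CounterID key.
SELECT TOP 10
    cd.MachineName,
    cd.ObjectName,
    cd.CounterName,
    d.CounterValue,
    d.CounterDateTime   -- note: stored as char, not datetime
FROM CounterDetails AS cd
JOIN CounterData AS d
    ON cd.CounterID = d.CounterID
ORDER BY d.CounterDateTime DESC
```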
Creating the SQL Server view to format the data for the reports
Now that you have your Counter Log periodically inserting counter data into your monitoring database, you can begin putting together the SQL Server view that will format the data for the reports. Using both the CounterData and CounterDetails tables, the script in Listing 1 formats the Counter Log data for the report.
CREATE VIEW vSQLPerfMon AS (
/*
DESCRIPTION: Format the Counter Log data to be used in the reports.
AUTHOR: Shawn Calderon
DATE WRITTEN: September 1, 2005
*/
SELECT
CounterDetails.MachineName,
CounterDetails.ObjectName,
CASE WHEN CounterDetails.CounterName = '% Processor Time'
THEN '% Processor Time - Consistent values over 85% suggests a processor bottleneck'
WHEN CounterDetails.CounterName = 'Pages/sec'
THEN 'Pages/sec - Extended periods of 20 or greater suggests a memory bottleneck'
WHEN CounterDetails.CounterName = 'Buffer cache hit ratio'
THEN 'Buffer cache hit ratio - dipping below 85-95% suggests a memory bottleneck'
WHEN CounterDetails.CounterName = '% Disk Time'
THEN '% Disk Time - Consistent periods exceeding 90% suggests an I/O bottleneck'
ELSE CounterDetails.CounterName END AS CounterName,
ROUND(CounterData.CounterValue, 2) AS CounterValue,
CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime) AS CounterDate,
DATEPART(YEAR, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS CounterYear,
DATENAME(MONTH, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS CounterMonth,
DATENAME(WEEK, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS CounterWeek,
DATENAME(WEEKDAY, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS CounterWeekDay,
DATENAME(DAY, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS CounterDay,
CAST(CAST(DATEPART(HOUR, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS varchar) +
CAST(CASE WHEN DATEPART(MINUTE, CAST(LEFT(CounterDateTime, 19) AS smalldatetime)) BETWEEN 1 AND 15 THEN 15
WHEN DATEPART(MINUTE, CAST(LEFT(CounterDateTime, 19) AS smalldatetime)) BETWEEN 16 AND 30 THEN 30
WHEN DATEPART(MINUTE, CAST(LEFT(CounterDateTime, 19) AS smalldatetime)) BETWEEN 31 AND 45 THEN 45
ELSE 59 END AS varchar) AS int) AS CounterTime
FROM CounterData
JOIN CounterDetails
ON CounterDetails.CounterID = CounterData.CounterID
WHERE CAST(CAST(DATEPART(HOUR, CAST(LEFT(CounterData.CounterDateTime, 19) AS smalldatetime)) AS varchar) +
CAST(CASE WHEN DATEPART(MINUTE, CAST(LEFT(CounterDateTime, 19) AS smalldatetime)) BETWEEN 1 AND 15 THEN 15
WHEN DATEPART(MINUTE, CAST(LEFT(CounterDateTime, 19) AS smalldatetime)) BETWEEN 16 AND 30 THEN 30
WHEN DATEPART(MINUTE, CAST(LEFT(CounterDateTime, 19) AS smalldatetime)) BETWEEN 31 AND 45 THEN 45
ELSE 59 END AS varchar) AS int) BETWEEN 659 AND 1901
)
I use a view because it simplifies the code used in the report and allows for an added level of server-side security; for example, you can limit who has access to the view through a role. Notice that in the SELECT clause, I have written a CASE statement to make the CounterName output more descriptive. This will help the user determine what the thresholds for a particular counter are and whether or not there is a possible bottleneck. You will also notice that the CounterDateTime field is not a datetime data type but rather a char data type. The script extracts the first 19 characters of the CounterDateTime field and then converts it to a smalldatetime. It then breaks this field into years, months, weeks, weekdays, days and hours. The ROUND function is used to make the counter values easier to report, as it limits the number of numeric digits displayed. For the CounterTime field, a CASE statement places the actual times into consistent 15-minute timeframes. Again, this is done to make the report's time display both consistent and intuitive. In the WHERE clause, you will find that the output is limited to records that occur between 7:00 AM and 7:00 PM. I like this time range because it allows me to see how my server is performing during normal working hours. This clause is optional, so if you want a complete 24-hour picture of system usage, leave this part of the statement out.
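To spot-check the view's output before building the report, you can run a quick query in Query Analyzer (a sketch; the TOP 5 limit is just for browsing):

```sql
-- Sample the formatted output of the view.
SELECT TOP 5
    CounterName,
    CounterValue,
    CounterDate,
    CounterYear,
    CounterMonth,
    CounterDay,
    CounterTime   -- e.g. 715 represents the 7:01-7:15 AM bucket
FROM vSQLPerfMon
ORDER BY CounterDate DESC
```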
Creating the Daily Performance Monitor Report
Now that we have our new SQL view, let's build the report. Launch Visual Studio .NET and create a new project called SQL Performance Monitor. Ensure that the Business Intelligence project folder and the Report Project Wizard are selected. Click OK. Click Next after the Report Wizard window comes up. In the Select data source window, name the new data source dbSQLPerfMon and click on the Edit button. Select or enter a server name and click on the Use Windows NT integrated security radio button, as shown in Figure 5. Select the database on the server and click OK to return to the Report Wizard window.
Click Next and copy the following code into the Query String Window as in Figure 6.
/*
DESCRIPTION: List Counter Log data for the Performance Monitor Report
AUTHOR: Shawn Calderon
DATE WRITTEN: September 1, 2005
*/
SELECT MachineName, ObjectName, CounterName,
CounterValue, CounterDate, CounterTime
FROM vSQLPerfMon
WHERE CounterYear = @CounterYear
AND CounterMonth = @CounterMonth
AND CounterDay = @CounterDay
ORDER BY MachineName, ObjectName, CounterName, CounterDate
Query String Window
Click Next. Ensure Tabular is selected and click Next. Select MachineName and press the Page> button. Select ObjectName and press the Group> button. Finally, select CounterName and press the Details> button and click Next, as shown in Figure 7.
To add a chart, go to the Toolbox tab in the left lower part of the design pane and select Chart and click on the bottom Group row. A Chart should appear in this group as seen in Figure 8 below.
Click on the Fields tab located in the lower left part of the design pane. Drag and drop the CounterName, CounterValue and CounterTime fields into the Drop series fields here box. Right click on the CounterTime field and select Properties. Click on the Sorting tab and ensure that the Expression field reads "=Fields!CounterTime.Value".
Again, right click on the Chart and select Properties. When the Chart Properties window pops up, go to the Palette: field and choose Semi-Transparent. Click on the Legend tab and select the button in the bottom middle position and click OK.
Go to the top of the screen, select the Report menu item and click on Report Parameters. You should see report parameters for CounterYear, CounterMonth and CounterDay. In the CounterYear parameter, type Year: into the Prompt field. In the Available values: section, add the list of years in both the Label and Value fields for the Year parameter. Similarly, add the lists of months and days to Month: and Day: respectively (as shown in Figure 9 below).
Figure 10 is an example of the kind of report that you can produce.
A little more about Counters
Processor: % Processor Time is a comprehensive indicator of processor activity.
If % Processor Time exceeds 85% for prolonged periods of time then you should consider the following:
- Find the application that is causing the excessive load on the system. In an environment where ad-hoc query tools are prevalent, poorly written queries can cause processor usage to spike, leaving little left for competing processes. Whenever I get a user complaint that the database application is running sluggishly, the first place I look is at how my processors are being utilized. Whether it is a database application or an ad-hoc query, the steps are the same: localize the resource-intensive process and look for ways to tune it. Sometimes just adding an index can mean the difference between a query that takes minutes to run and one that runs in under a second.
- Add CPUs. In a recent systems upgrade, we purchased a server with 4 Processor slots but only filled two. We did this with the hope of adding processors later when the load requirements surpassed the current configuration. If you do not have the option of adding additional CPUs then get faster CPUs. You also may want to consider putting in CPUs with more cache. The larger the processor cache, the more transactions that can be handled internally by the CPU and the better the performance.
- Go 64-bit. Many administrators run SQL Server 2000 in the default 32-bit mode. However, many systems have the option of upgrading to SQL Server 2000 64-bit. Now that Microsoft has released SQL Server 2000 Service Pack 4, you can take full advantage of the 64-bit capabilities on either an AMD Opteron server or the more expensive Itanium systems. The 64-bit functionality greatly increases processing power by allowing the CPUs to process data in larger chunks and increasing the amount of memory they can address. This means your server can have more RAM, and therefore can store more data in memory, making memory-intensive queries run faster.
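On the indexing point above, here is a hypothetical illustration (the table and column names are invented for the example and are not part of the monitoring system):

```sql
-- Hypothetical example: an ad-hoc report filters orders by date.
-- Without an index on OrderDate, SQL Server must scan the table.
CREATE INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)

-- The same query can now be resolved with an index seek:
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20050901'
```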
PhysicalDisk: % Disk Time lets you know how busy your disk arrays are. It can also show you which ones are working harder than others.
If % Disk Time is greater than 90% for prolonged periods of time, you may want to consider adding drives to your physical disk subsystem. The more drives you add, the more drive heads you have working for your read/write operations. You can also try replacing your 5,400 and 7,200 RPM drives with 10,000 and 15,000 RPM drives to greatly increase throughput in megabytes per second. Also, consider configuring your disk arrays for faster RAID levels. RAID 10 is the fastest for all of your files except transaction logs. Since transaction log writes are sequential, RAID 1 works well and will save on the cost of additional drives. Below is an example of an optimal RAID configuration to support an efficient SQL Server file placement strategy:
- SQL Server executables: RAID 1
- tempdb: RAID 10
- Data files i.e. mdf and ndf files: RAID 10
- Transaction Logs i.e. .ldf files: RAID 1
Adding more physical RAM to your system also allows more operations to be handled in memory, thereby freeing up your disk subsystem.
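The file placement above might look like this at database-creation time (a sketch; the database name, drive letters and paths are assumptions for illustration):

```sql
-- Hypothetical placement: data files on the RAID 10 array (E:),
-- transaction log on the RAID 1 array (F:).
CREATE DATABASE dbSales
ON PRIMARY
    ( NAME = dbSales_data,
      FILENAME = 'E:\SQLData\dbSales.mdf',
      SIZE = 500MB )
LOG ON
    ( NAME = dbSales_log,
      FILENAME = 'F:\SQLLogs\dbSales.ldf',
      SIZE = 100MB )
```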
Memory: Pages/sec tracks how many pages per second are being read from or written to disk. Because disk I/O is much slower than RAM, excessive paging can be a huge performance hit. If Pages/sec is greater than 20 for long periods of time, consider letting SQL Server dynamically allocate memory. This allows SQL Server to scale memory usage to meet ever-changing database needs. To ensure that SQL Server is dynamically allocating memory, right click on your SQL Server instance in Enterprise Manager and select Properties. Click on the Memory tab and make sure that the Dynamically configure SQL Server memory radio button is selected.
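The same setting can be checked from Query Analyzer with sp_configure (a sketch; on SQL Server 2000, a max server memory value left at its default of 2147483647 indicates dynamic allocation):

```sql
-- Make the advanced memory options visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Inspect the current memory settings.
EXEC sp_configure 'min server memory (MB)'
EXEC sp_configure 'max server memory (MB)'
```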
You can also try to ensure that only SQL Server is running on your system. Any other applications will rob SQL Server of precious memory resources. Don’t let the IT Group use it as a Domain Controller!
Another suggestion is to shut down any unneeded processes. From time to time, I will bring up Task Manager and click the Processes tab to see what's running. If I do not recognize a process, I throw the process name into a search engine to get a complete description; this helps me determine whether or not the process should be killed.
If you have more than 4GB of memory, learn how to configure AWE memory support to ensure that SQL Server is effectively utilizing all of its available memory resources. Adding additional memory might be your best option. The amount of RAM that can be used is both operating system and SQL Server version dependent, so make sure you know what your limitations are before you make any purchases.
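On SQL Server 2000 Enterprise Edition, AWE is enabled through sp_configure (a sketch; AWE also requires the /PAE switch in boot.ini, a restart, and, because AWE memory is not managed dynamically, an explicit max server memory setting sized for your server):

```sql
-- Enable AWE so SQL Server 2000 can address memory beyond 4GB.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'awe enabled', 1
RECONFIGURE

-- With AWE on, cap SQL Server's memory explicitly; 6144MB here
-- is an example value for an 8GB server, leaving room for the OS.
EXEC sp_configure 'max server memory (MB)', 6144
RECONFIGURE
```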
Buffer Manager: Buffer cache hit ratio displays the percentage of SQL Server page requests that are satisfied from memory instead of disk. Again, since disk throughput is significantly slower than RAM, this value should normally be close to 100%.
If the Buffer cache hit ratio is running below 85-95%, consider setting up a SQL Profiler trace to see which T-SQL statements are the culprits and check whether they could use a little performance tuning. As mentioned above, if you have more than 4GB of memory, learn how to configure AWE memory support to ensure that SQL Server is optimally utilizing its memory resources.
If SQL Server is effectively using all of its memory, then find out what your system requirements are and consider buying more RAM. The more RAM you have, the less likely SQL Server will page its processes to disk.
Using the Daily SQL Performance Monitor report
The main purpose of these performance monitoring reports is to provide a visual representation of how the server hardware is performing and to determine if there are any resource bottlenecks. Bottlenecks occur when the hardware cannot keep up with the demands of the software. In the example above, you can see that as the Pages/sec counter begins to peak, the Buffer cache hit ratio begins to decrease. This can mean the memory subsystem capacity has been exceeded and that operations that would normally be handled in memory are now being paged to disk. This would be considered a possible memory bottleneck. Things to take into account when experiencing a hardware resource bottleneck are:
· Consider the user experience during peak load times. Even though you may see possible bottlenecks occur throughout the day, the user experience may not be affected. You may just want to keep an eye on things and see if the trend increases over time. In other words, peak usage spikes do not always necessitate immediate or drastic action.
· Even if the users aren’t complaining, it may be a good idea to fire up the SQL Profiler and trap the T-SQL Statements that are running during these peak usage times. You may find poorly written T-SQL queries to be the cause of your peak usage spikes.
· If you find that users are being negatively affected and that there is no poorly written T-SQL code, it is probably time to upgrade your hardware.
You can also use the report to:
· Identify daily usage trends and peak usage times
· Review granular detail of system resource utilization
· Correlate peak usage to the user experience
· Compare counters from each subsystem to get a more complete picture
For example, Pages/sec exceeding 20 may not be an issue if the Buffer cache hit ratio stays within the 85-95% (or better) range.
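That kind of cross-counter comparison can also be made directly against the view (a sketch of a daily summary query; the month and year values are examples):

```sql
-- Daily peak and average per counter, for checking whether a
-- Pages/sec spike coincides with a Buffer cache hit ratio dip.
SELECT
    CounterName,
    CounterDay,
    MAX(CounterValue) AS MaxValue,
    AVG(CounterValue) AS AvgValue
FROM vSQLPerfMon
WHERE CounterYear = 2005
  AND CounterMonth = 'September'
GROUP BY CounterName, CounterDay
ORDER BY CounterName, CAST(CounterDay AS int)
```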
Build the Monthly SQL Server Performance Monitor Report
I have designed the Daily SQL Server Performance Monitor Report to be easily reconfigurable to both a Monthly and Yearly report.
- Make a copy of the Daily SQL Performance Monitor.rdl and name it Monthly
- Open the report in Visual Studio .NET
- Remove the CounterDay parameter and in the WHERE clause, take out “AND CounterDay = @CounterDay”
- Take out the CounterTime field from the chart and add the CounterDay field in its place. Ensure that the chart sorts by the CounterDay field.
- In the Drop data fields here box, double click the Counter Value field. Change =SUM(Fields!CounterValue.Value) to =MAX(Fields!CounterValue.Value) and change the Series label: from Counter Value to MAX Counter Value. Drag and drop a second CounterValue field into the Drop data fields here box. Double click this second Counter Value field, change =SUM(Fields!CounterValue.Value) to =AVG(Fields!CounterValue.Value), change the Series label: from Counter Value to AVG Counter Value and click OK.
- Change the Report title from Daily to Monthly
Using the Monthly SQL Performance Monitor report
Like the Daily SQL Performance Monitor report, the monthly report provides useful information about your hardware resources. This report provides a summary of average daily usage. You can use this report to:
· Determine the peak usage days for a particular month
· Load balancing. For example, let's say you notice that two very resource-intensive reports are being run during the same peak usage times. You can talk to the users of these reports and request that they run them at known off-peak hours to help evenly distribute the load on the SQL Server.
· Monitor subtle rises in system resource usage to assist in system upgrade planning.
Build the Yearly SQL Server Performance Monitor Report
- Make a copy of the Monthly SQL Performance Monitor.rdl and name it Yearly
- Open the report in Visual Studio .NET
- Remove the CounterMonth parameter and in the WHERE clause, take out “AND CounterMonth = @CounterMonth”
- Take out the CounterDay field from the chart and add the CounterMonth field and CounterWeek field in its place. Ensure that the chart sorts by the CounterMonth field then by CounterWeek.
- Change the Report title from Monthly to Yearly
Using the Yearly SQL Performance Monitor report
This report shows the average usage per week. The main purpose of this report is to look for trends that will help you plan your next system upgrade. On most SQL Servers, more and more databases are added over time, and with these new databases come new users. Because of this upward trend in utilization, you can use this report to help project when you will need to upgrade your systems or buy new servers.
Using Microsoft’s perfmon.exe and SQL Server Reporting Services to monitor hardware performance can help you find out where bottlenecks are occurring on your server. Knowing where the bottlenecks exist is half the battle and can lead to smart decisions that ensure the efficiency of your systems. This information can also help system administrators come up with good strategies for future hardware upgrades and/or new server acquisitions.