Zero to hero

The Mission
Recently I was asked to take a look at a customer database and recommend which code they could improve and how. The deadline for the assignment was extremely tight (3 days). I had to go through just over 7,000 stored procedures and look at the design of 1,323 tables in the process.

Light bulb Moment
I remembered that a while back, at a SQLCape user group meeting, someone had recommended Qure Optimizer. I decided to download the trial version and give it a go, since I like tools that make my life a bit easier so I can spend more time on R&D.

What it does
What happened next was awesome. Why? Because I didn’t expect the level of detail Qure Optimizer provided. But before you start, please do not run this on your live environment. Run it against a backup of your live environment and then implement the changes afterwards with the scripts provided. Here is what it did:
Restructured stored procs
Added new indexes
Modified existing indexes
Rewrote queries
Tweaked the schemas

Happy DB, Happy DBA
Multiple workload sources: Qure Optimizer supports SQL Server workloads captured in one or more trace files or trace tables, as well as workloads captured by select third-party monitoring tools (in my case, SQL Sentry).
Large workloads supported: Qure Optimizer supports SQL Server tuning with large workloads consisting of many millions of queries. Trace files up to 10GB are supported.

Holistic balancing of performance benefits: Qure Optimizer tunes the entire workload holistically. When recommending any specific improvement (e.g. “add an index”), Qure Optimizer considers the SQL Server performance effects of that recommendation across the entire workload, not just on an individual query.
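For comparison, the usual manual starting point is SQL Server’s missing-index DMVs, which score index candidates one query plan at a time rather than weighing them against the rest of the workload (write overhead included). A minimal T-SQL sketch using only the standard DMVs; the scoring expression is the commonly quoted heuristic, not anything Qure itself produces:

-- Per-plan index candidates; note these DMVs ignore write overhead and the wider workload.
SELECT TOP (20)
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks + s.user_scans AS potential_uses,
    s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans) AS rough_benefit_score
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY rough_benefit_score DESC;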

Before-and-After Performance Benchmark
SQL Server Performance benchmark: Every recommendation is automatically benchmarked against the copy-of-production database. Qure Optimizer replays the workload, applies the recommendations, and then re-runs the workload to measure actual performance improvement.
Performance metrics: For each query, the performance improvement achieved is shown via a range of metrics: Duration, Physical Reads, Logical Reads, CPU and Writes.
Validation of recommendations: The benchmark also validates the functional correctness of the recommendations. For example, in the case of SQL rewrites, the benchmark verifies that the result sets returned by the original SQL and the rewritten SQL are identical.
Predictive knowledge: Thanks to the performance benchmarks, the benefits of the selected SQL Server tuning recommendations are known and quantified in advance of deploying the recommendations to production.
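If you want to sanity-check the same metric families (duration, physical and logical reads, CPU, writes) yourself after a change, the plan cache already tracks them per statement. A quick sketch against the standard sys.dm_exec_query_stats DMV (averages computed inline; times are in microseconds):

-- Top cached statements by average duration, with CPU, reads and writes alongside.
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time   / qs.execution_count AS avg_duration_us,
    qs.total_worker_time    / qs.execution_count AS avg_cpu_us,
    qs.total_logical_reads  / qs.execution_count AS avg_logical_reads,
    qs.total_physical_reads / qs.execution_count AS avg_physical_reads,
    qs.total_logical_writes / qs.execution_count AS avg_writes,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_duration_us DESC;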
SQL Server Tuning Recommendations – Deployment to Production
Apply scripts: Executable scripts are provided for applying the recommendations to production, selectively or in bulk.
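To give an idea of the shape: such apply scripts are plain T-SQL that you can review and source-control before deployment. A purely hypothetical illustration of what a single index recommendation might look like (the table, index and column names are invented for illustration, not actual Qure output):

-- Hypothetical index recommendation; names are made up for illustration only.
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_Orders_CustomerId_OrderDate'
                 AND object_id = OBJECT_ID(N'dbo.Orders'))
BEGIN
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate)
        INCLUDE (TotalDue)
        WITH (SORT_IN_TEMPDB = ON);
END;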

Hero Status
By using Qure Optimizer I was able to complete my task within a couple of hours, which impressed the powers that be, and they hired me to look at their other SQL environments (a consultant’s dream come true). A big bonus for me is the fact that Qure Optimizer works with SQL Sentry’s monitoring tools.
Check it out
So buy yourself a from-me-to-me gift this Christmas and check out the Qure Optimizer video on YouTube.

 

 


Awesome new tool, T-SQL Unscrambler… just had to share

As SQL Server professionals we all rely on up-to-date documentation from clients as a way to get to know their environments quicker. More often than not it is either outdated or they don’t have any, which makes our task a bit more difficult: we now have to go through their servers and databases individually, which adds days to the engagements.

Recently I faced this situation again, but I was prepared. I have started using GenesisOne Solutions’ T-SQL Source Code Unscrambler to help me understand and document client environments within hours instead of weeks. GenesisOne is also the first partner chosen by Redgate to be part of its SSMS Ecosystem.

Why did I decide to use GenesisOne?

My client is in the financial services industry. They had a tight deadline and a very limited budget. After I showed them the results, they handed me more work :)

The database, while not big, is used by their main application.

925 Tables

832 other database objects including stored procedures, views and functions

And for good measure, add one week to learn the environment and make recommendations on how to improve database performance… did I mention there was no documentation?
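Faced with an undocumented database like that, the first thing I do is a raw inventory so I know what I am up against. A quick sketch against the standard catalog views (run it in the database in question; your counts will differ):

-- Count user objects by type to size up an unfamiliar database.
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
WHERE is_ms_shipped = 0
GROUP BY type_desc
ORDER BY object_count DESC;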

Unlock Trial Account

Enter your product key and click Activate.

Add server

Now enter your server name(s) and then click Add Server. You will notice that both SQL and Windows authentication are supported. In the right-hand pane you are able to remove servers you no longer wish to document.

(Screenshot: Server Explorer)

In the left pane you can view the database objects, and on the right you can see a graphical view of your table and its dependencies.

Next, expand the view tree and select a view. As you can see, there are four tabs, each giving more insight into your view(s). The first is the data flow chart; it gives you a visual flow of your view as well as which objects make up the view.

(Screenshot: View Explorer)

Next is the property tab, showing you everything in tabular form.

(Screenshot: Property view)

I like the next tab, the summary tab: it tells the story of your object’s code in plain English!!! Awesome!! This saves a hell of a lot of time otherwise spent explaining to people what the code does.

(Screenshot: Summary view)

The next tab is the dependency viewer, in tabular form. It tells you what makes up your object; in my case I chose a view, and as you can see it tells me which schema.table.column makes up my view.

(Screenshot: Dependency view)
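For comparison, the same raw dependency information is reachable from T-SQL, just without the visual layer. A minimal sketch using the standard sys.dm_sql_referenced_entities function (dbo.MyView is a placeholder name):

-- Which schema.table.column a given view depends on; dbo.MyView is a placeholder.
SELECT referenced_schema_name,
       referenced_entity_name,
       referenced_minor_name AS referenced_column
FROM sys.dm_sql_referenced_entities(N'dbo.MyView', 'OBJECT')
WHERE referenced_minor_name IS NOT NULL;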

Last but not least is the dependency diagram, which gives you a high-level view of your object. As you can see, each colour represents a different object.

(Screenshot: Dependency diagram view)

At the top right-hand side of the pane you have the option to export your object to PNG, PDF or SVG format. If you do not want to export objects individually, you can document your entire database to PDF within seconds. To do this, simply right-click on the database and click Generate Report, as shown below.

(Screenshot: Generate report)

Now select a location to save the PDF to and click Save. At the bottom of the left pane you will see a progress bar indicating how long it will take to complete your document. Once it completes (very, very quickly) you will get the download completed screen.

(Screenshot: Download completed)

After that you can open the documentation for your entire database.

To help you get out of the blocks quicker, please watch the demo video below.

I managed to finish within 5 days and make the required recommendations. I also asked .NET MVP Dirk Strauss (T|B) to give GenesisOne a go. His feedback: “As a developer, GenesisOne allows me to learn my clients’ environments fast, allowing me to produce results faster.”

What I would like to see

SSAS support

SSIS support

SSRS support

The ability to generate reports for entire instances instead of per database

The option to save generated files in a custom location

Of all the tools that I have used in the past, T-SQL Unscrambler is different: it is not a traditional documentation tool but a visual aid for SQL professionals.

Should you have any suggestions on how we can improve the product, drop us a mail at CodeVisualization@GenesisOneSolutions.com. We love the SQL community and want to get involved by sponsoring your community; drop us a mail at sqlrocks@genesisonesolutions.com.


SQLSaturday Cape Town 06 September 2014

It’s that time of the year again. SQLSaturday Cape Town is around the corner on 06 September, with pre-cons on 05 September. We will have 3 pre-cons: SQL performance tuning, SSIS internals and BI. For details on the pre-cons, click here. This year’s event will take place at Protea Hotel Techno Park in Stellenbosch. To register for this year’s event, click here.

This year’s line-up is awesome!!! Speakers include SQL Server MCMs and MVPs from around the globe but, most of all, local speakers, which is awesome!! Check the schedule here.

Hope to see you there

Jody



SQL 2012 – Distributed Replay

The Microsoft SQL Server Distributed Replay feature helps you assess the impact of future SQL Server upgrades. You can also use it to help assess the impact of hardware and operating system upgrades, and of SQL Server tuning. Like SQL Server Profiler, Distributed Replay can replay a captured trace against an upgraded test environment. Unlike SQL Server Profiler, it is not limited to replaying the workload from a single computer: Distributed Replay can use multiple computers to replay trace data, which makes it a more scalable solution and lets you better simulate a mission-critical workload. Use Distributed Replay for application compatibility testing, performance testing, or capacity planning. In this session we will cover the following:

  • What is SQL Server Distributed Replay
  • Distributed Replay Concepts
  • Benefits of Distributed Replay
  • When to Use Distributed Replay

But before we get there we need to look at risk. Do not run it against your prod environment. ONLY use the Distributed Replay Utility (DRU) in a test environment, as it:

  • Can be a security risk to sensitive information
  • Only uses Windows Authentication
  • Only supports the SQL Server Native Client ODBC data access provider

What is the Distributed Replay functionality in SQL Server 2012?

  • It was introduced in SQL 2012
  • Allows us to assess SQL, OS and hardware upgrades
  • Allows us to do performance tuning or troubleshooting
  • DRU can run in 2 modes – Synchronization or Stress mode

Distributed Replay Concepts

  • Distributed Replay administration tool: A console application, DReplay.exe, used to communicate with the distributed replay controller. Use the administration tool to control the distributed replay.
  • Distributed Replay controller: A computer running the Windows service named SQL Server Distributed Replay controller. The Distributed Replay controller orchestrates the actions of the distributed replay clients. There can only be one controller instance in each Distributed Replay environment.
  • Distributed Replay clients: One or more computers (physical or virtual) running the Windows service named SQL Server Distributed Replay client. The Distributed Replay clients work together to simulate workloads against an instance of SQL Server. There can be one or more clients in each Distributed Replay environment.
  • Target server: An instance of SQL Server that the Distributed Replay clients can use to replay trace data. We recommend that the target server be located in a test environment.
  • The Distributed Replay administration tool, controller, and client can be installed on different computers or the same computer. There can be only one instance of the Distributed Replay controller or client service that is running on the same computer.

Distributed Replay Architecture

Benefits of DRU

  • It is not limited to replaying a workload on one machine
  • It can use multiple computers to replay trace data, better simulating mission-critical workloads
  • Use Distributed Replay for application compatibility testing, performance testing, or capacity planning
  • It provides better scalability than Profiler, as DRU scales out to 16 clients

When to use the tools

SQL Server Profiler

  • You want to use the conventional replay mechanism on a single computer. In particular, you need line-by-line debugging capabilities, such as the Step, Run to Cursor, and Toggle Breakpoint commands.
  • You want to replay an Analysis Services trace.

Distributed Replay

  • You want to evaluate application compatibility. For example, you want to test SQL Server and operating system upgrade scenarios, hardware upgrades, or index tuning.
  • The concurrency in the captured trace is so high that a single replay client cannot sufficiently simulate it.

For the video on this, please see the Distributed Replay webinar I did for Pragmatic Works.


The Big Difference

In this post I want to clarify the difference between High Availability and Disaster Recovery from a DBA point of view. If your current DR plan looks like Dilbert’s, you had best read this post.

Things we will cover:
  • What is disaster recovery
  • Why do we always need to plan for a disaster
  • Tools we can use to make disaster recovery easier
  • High availability tools SQL provides us
  • Recovery facilities

What is disaster recovery?
Disaster Recovery is the process you follow to recover your environment from scratch. A disaster can be anything from a flood, fire, power outage or explosion to a human error that affects your primary data centre and forces your organisation to run on minimum resources (not fun; been there, done that… hope not to do it again soon).

Why we always need to plan for a disaster
Ask yourself: Do you want to stay in business?
If your answer is yes then you understand why a DR and HA plan is important to the organisation. A plan doesn’t only allow others to understand your thought process but helps you recover your systems faster as you know exactly what needs to be done and can relay your message faster.

Recovery from a disaster can take a while depending on the processes your organisation has in place. As the DBA you are responsible for your company’s SQL Servers and as a DBA you always have to think ahead. Here are a few things to help you in setting up a DR plan


A few things to help aid your recovery:

Visualisation
Talk to your business users first. Why, you may ask? They are the ones who will give you an indication of which systems are important and what they require in order to function with minimal resources. Sit with your team and visualise your tier 1 systems (for the purposes of this blog we focus on tier 1 servers). By this I mean draw a picture and put it up against the walls of your cubicle.

This way you will not forget how the databases and applications fit together. You also need to include the SQL Agent jobs (please script these out or use BI Documenter) and script out all your logins; I would recommend Idera’s SQL Permissions Extractor.
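Until the proper scripts and documentation exist, even a raw inventory of what you would have to recreate is worth pinning up next to the picture. A minimal sketch using the standard msdb and server catalog views (it lists jobs and logins, it does not script them out):

-- SQL Agent jobs you would need to recreate at the DR site.
SELECT j.name AS job_name, j.enabled, c.name AS category
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
ORDER BY j.name;

-- Logins (SQL, Windows and Windows groups) to recreate; internal logins excluded.
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')
  AND name NOT LIKE '##%';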

Documentation
One of the key things you require is to keep your documentation up to date. Use a third-party documentation tool like BI Documenter by Pragmatic Works; it documents the entire SQL Server stack. If all hell breaks loose, at least you will be able to recover your database structures, as the documentation includes all the structures and a sample of the data. Below you see such a sample. If you cannot use this software, I would recommend Red Gate’s SQL Virtual Restore. It allows you to restore databases to your local machine using minimal space (I think it was about 10 MB) and the database is readable. I used it a few times when our production restores to our support server were running late; it allowed me to buy some time until the restores completed.

Always test your backups 

Test your backups regularly, whether they are disk or tape backups. This ensures that you have a valid backup set. Remember, a DBA is only as good as his/her last backup.

(Screenshot: backup test)

This will be your first step when you start the recovery process.
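At the very least, check that the backup files on disk are readable and complete; a real test restore followed by DBCC CHECKDB is better still. A minimal sketch of the quick check (the path is a placeholder):

-- Quick sanity check of a backup file; the path is a placeholder.
-- VERIFYONLY does not replace an actual test restore and CHECKDB.
RESTORE HEADERONLY FROM DISK = N'X:\Backups\MyDatabase_Full.bak';
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\MyDatabase_Full.bak';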

Disaster

Most organisations test the DR plan twice a year (at least the clients I have worked for did). This allows you to keep improving the plan and to ensure that it fulfils all audit requirements set by your organisation’s risk management division.

From my own experience: back in 2009 our data centre got flooded and almost all our servers were taken out. Luckily my colleague and I did not panic and recovered our tier 1 database servers in a few hours. We had been testing our DR strategy twice a year for a few years, so when the disaster arrived we had no worries.

High Availability
Over the last 13 years I have designed and carried out a few Disaster Recovery (DR) plans, as well as recovered systems based on those same DR plans. High availability is your top priority. As a DBA you are responsible for the design and its implementation. Many organisations use a combination of hardware and software technologies to ensure that they meet their SLAs.

What is high availability?
There are still some people who believe that HA and DR are the same thing. They are not!
In a nutshell, High Availability is the mechanism, implemented via various designs and technologies, that ensures your applications stay up when your server(s) malfunction. It also helps IT departments better adhere to their SLAs.

There are lots of ways to ensure the 99.9% uptime that most SLAs demand.
Hardware vendors like HP provide SAN replication software allowing you to copy the .ldf and .mdf files to a certain point in time. A couple of years ago we used the software to successfully restore one of our production systems to our DR site. It took 15 minutes to restore a system that usually took me 8 hours to restore without using the software (Happy dance).

From a server side you can also look at Hyper-V, VMware, Windows failover clustering or a combination of them. You will need to discuss this with the system stakeholders and figure out which option suits you best. The best way to find out is to play around with the various technologies and find your flavour.

SQL Server provides us with some built-in HA features


AlwaysOn Availability Groups: leverage WSFC (Windows Server Failover Clustering), allow you to add your precious cargo (databases) into availability groups, and fail over automatically (faster than a traditional cluster). Add an AG listener and your users won’t even know which server they are using (a quick health-check query follows after this list).
Database Mirroring: a solution for increasing the availability of a SQL Server database. Mirroring is used on a per-database basis and works only with databases that use the full recovery model.
Log Shipping: log shipping operates at the database level. You can use log shipping to maintain one or more warm standby databases (referred to as secondary databases) for a single production database, referred to as the primary database.
Replication: a set of technologies for copying and distributing data and database objects from one database to another and then synchronising between databases to maintain consistency. Using replication, you can distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.
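Whichever of these you settle on, build a health check into your routine. For AlwaysOn Availability Groups, a minimal sketch against the standard HADR catalog views and DMVs looks like this:

-- Role and synchronisation health of each replica in every availability group.
SELECT ag.name AS availability_group,
       ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_groups AS ag
JOIN sys.availability_replicas AS ar
    ON ar.group_id = ag.group_id
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON ars.replica_id = ar.replica_id AND ars.group_id = ag.group_id
ORDER BY ag.name, ars.role_desc;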
For more information on SQL Server HA options please get in touch with us.

So what is the big difference?
Disaster recovery is the process we follow when we have to recover systems from scratch in the event of a catastrophe. High availability is best described as the business continuity process we follow when (one of) our server(s) goes down and we need to fail over automatically with minimum impact to the business users.
