It’s been a while…

So I know I have not blogged in a while, and that is bad on my part. I have been so busy with the new job that blogging just doesn't make the list. It has also been a struggle figuring out what I can share and what I should share. Well, I finally found something I can share that has been plaguing me for a while and that I think will be beneficial to others.

I run a Mac to do my daily work, and I will run VMs or RDP sessions to Windows boxes so I can do SQL work. I have been trying to be more efficient and not use Windows if I do not have to. Plus, if I can stay on the Mac side, it will aid me in my current goal to learn Linux, containers, and other database platforms. To do more SQL Server related work from my Mac, I need to be able to connect to various SQL Servers to run queries. In my environment, I connect to a development domain for most of my work and a production domain for things that are customer facing. I also want to be able to use Windows Authentication to connect to these SQL Servers. This has proven difficult on my Mac, as it is not joined to a domain.

I have heard of other members of the community doing this, but I kept struggling. I had read this blog post several times, but nothing worked.

Then it dawned on me, and I realized my error. To use Windows Authentication, you must have Kerberos working. That was the missing link. For Kerberos to work, you must have a Kerberos ticket, and the server you want to connect to must have a Service Principal Name (SPN) registered for it.

First, I made sure to run the Microsoft Kerberos Configuration Manager for SQL Server. You can download that here. It should be run from a Windows box that is joined to the domain. When you run it against one of your SQL Servers, it will scan Active Directory to see if an SPN exists for that SQL Server. If there is an issue, you will see the below:

[Screenshot: Kerberos Configuration Manager results showing the SPN Status as Missing]

Under the Status column, you can see that it says Missing. That means an SPN has not been created for this SQL Server. The Kerberos Configuration Manager will provide you with a script to fix this. You will want to give that script to your Domain Admin so they can run it and correct the issue. Once it has been run and the SPN has been created, you can use Kerberos for Windows Authentication. Without an SPN, Windows will fall back to NTLM, but on a Mac there is no such fallback, and thus you cannot use Windows Authentication.


Second, after the SPN is in place, open up a terminal and run:

kinit username@DOMAIN.COM

username is what you currently use to log in to your Active Directory domain, while DOMAIN.COM is the domain name, which must be in ALL CAPS.

You will be asked to authenticate by entering your password. If everything is correct, you can run the next statement:

klist username@DOMAIN.COM

You should get output similar to the below:
Credentials cache: API:BA74140D-7B6E-49B5-93E1-EA069AC87B20
Principal: chris.lumnah@RANGERS.LAB

Issued Expires Principal
Mar 20 20:05:58 2019 Mar 21 06:05:52 2019 krbtgt/RANGERS.LAB@RANGERS.LAB

Now you can use Azure Data Studio to connect to your SQL Server on Windows using Windows Authentication.
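
A quick way to verify that Kerberos is actually being used: once connected, ask SQL Server which authentication scheme your session negotiated. This is a standard DMV, and if everything is working it returns KERBEROS instead of NTLM:

-- Shows the authentication scheme for the current connection
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;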

New job…new thinking?

So I recently started a new position as a SQL Server Solutions Architect at Rubrik, Inc. We concentrate on data protection for your data center, and I will be focusing on SQL Server specific protection. Hit up the website or search YouTube for some videos if you want to know more.

Backup and recovery has always been something I am interested in. It was the first skill I learned when becoming a DBA. Before PowerShell was a thing, I wrote my own database backup process using SSIS that was highly dynamic, configurable, and, most importantly, simple to support. With the introduction of the Data Domain, I learned how to optimize the performance of a backup and restore. Over my career, I have learned that backup and recovery, and data protection in general, is probably the single biggest job the DBA has.

In my new role, it is expected of me to keep current and, more importantly, learn new things. SQL Server 2016 introduced a feature called Stretch Database. Essentially, this feature allows you to take data in a table or tables and keep part of it local while part goes to Azure. Conceptually, this works similarly to table partitioning, but mechanically it is different. Table partitioning separates your table into partitions based on a partition function. All of your data is still in the table; it is just reorganized into those partitions so queries can run more efficiently against smaller subsets of data. Stretch is similar in that you have a filter predicate that says what data stays local and what goes to Azure. The big difference here is that the data going to Azure actually leaves the local database.
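
As a concrete sketch, enabling Stretch on a single table looks something like the below. The table and column names here are hypothetical, and this assumes the instance and database have already been enabled for Stretch; the filter predicate must be an inline, schema-bound table-valued function:

-- Hypothetical predicate: rows with an OrderDate before 2017 are eligible to migrate
CREATE FUNCTION dbo.fn_StretchPredicate(@OrderDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
WHERE @OrderDate < CONVERT(datetime, '20170101', 112);
GO

-- Eligible rows migrate to Azure (OUTBOUND); everything else stays local
ALTER TABLE dbo.SalesOrders SET (
    REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_StretchPredicate(OrderDate),
        MIGRATION_STATE = OUTBOUND
    )
);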

Now, why would you do this? A good example might be a large sales order system. For the most part, your queries run against the most recent data. By keeping that data local, queries can return fast. Queries that need to read the old data will have to reach into Azure to get it. Stretching a database allows you to place that old data on cheaper storage. Yes, that data will probably take longer to query, but that is by design: place the old, less used data on cheap storage to free up the local, faster storage for the data you use most often.

I started to think about the implications for backup and recovery, and how this affects RPO and RTO.

Backup

Thankfully, to back up a database that has been stretched to Azure, existing native backup scripts do not need to change. This sounds great: I can offload a bunch of data to the cloud and nothing has to change from a backup and recovery perspective.
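
For example, the same plain native backup you run today keeps working as-is against a stretched database (database name and path here are hypothetical):

-- Nothing Stretch-specific needed; this backs up the local (shallow) data
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;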

Well not so fast.

When you do a native SQL backup, SQL Server performs what is called a shallow backup. This means the backup will contain only the local data and the data eligible for migration at the time the backup runs. Eligible data is data that has not yet migrated, but will be migrated to Azure based on the filter predicate.
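
If you are curious about what has already moved, SQL Server tracks the migration batches in a DMV; run in the stretched database, something like this shows the progress:

-- Each row describes a batch of rows migrated to Azure
SELECT * FROM sys.dm_db_rda_migration_status;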

So what happens to the data that has been moved to Azure? Azure backs that data up for you automatically, with storage snapshots taken every 8 hours.

See this link for more details.

Restore

So I have a backup, or really backups, of my data; how do I do a restore? Well, it depends. Where is the failure?

If the failure you are recovering from is local,

  1. Restore your local database as you normally would from a native SQL backup.
  2. Run the stored procedure sys.sp_rda_reauthorize_db to re-establish the connection between the local database and the Azure stretch database (example below).

If the failure is in Azure,

  1. Restore the Azure database in the Azure portal.
  2. Run the stored procedure sys.sp_rda_reauthorize_db to re-establish the connection between the local database and the Azure stretch database.
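
In either case, the reauthorize step looks something like the sketch below. The database name is hypothetical, and the credential placeholder should be the database scoped credential your stretched database uses to reach Azure:

USE [SalesDB]
GO
-- Re-establish the link between the restored local database and the remote data.
-- @with_copy = 1 connects to a copy of the remote database, which is the safer option.
EXEC sys.sp_rda_reauthorize_db
    @credential = N'<database_scoped_credential>',
    @with_copy = 1;
GO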

While the above looks simple, it can be a bit more involved as you walk through the details here.

Ultimately, Stretch Database is a cool piece of technology that makes you think differently. It forces you to think in terms of a recovery strategy rather than a backup strategy. You have to test restoring this type of database to understand the steps involved. You want a runbook that shows what you need to do to recover a database that is stretched, and you want to plan your backups to most effectively give you a successful recovery. Essentially, you need a recovery strategy, not a backup strategy.


Hmmm, sounds like something Paul Randal has been saying for a while. So, basically the same thinking, just reinforced.

TSQL Tuesday #96: Thanking those that helped along the way

This month's TSQL Tuesday fits the theme of the month: Thanksgiving. The TSQL Tuesday invitation is being sent out by Ewald Cress (b|t).

There have been many people who have helped my career along the way. Most have indirectly helped via their blog posts, videos, tweets, training classes, user group topics, SQL Saturday Sessions, or PASS Summit sessions. Those people have spent countless hours to help the community learn and get better by the work they do and share. Some of these people I have had the pleasure of meeting and thanking in person, some I have not had that chance yet.

While I have crossed paths with some incredibly smart people in my career, there are a few who have had a particularly great impact.

I started my career as a developer and fell into the DBA role. The first role I had was a good starting role for a DBA: a small environment, a few databases, and exposure to both OLTP and data warehousing. I could learn on the job and quickly grasp the concepts of what needed to get done to keep things going relatively smoothly. However, I had no measuring stick. I had no way to determine if I was doing a good job other than that no one was at my desk complaining that the data warehouse was down. I went out looking for a new position in the field to give me a greater challenge and the ability to learn the things I did not know. I went out for an interview, which I did not think went all that well. Questions were asked that I admittedly did not know the answers to. This brings me to the first person who had a great impact on my career.

Scott Hitchcock

He saw that I did not know certain things, but also saw that I was hungry to learn, and he decided to recommend that his boss hire me. Shortly thereafter I started as a Jr. DBA. I went from an environment that was basically the size of a swimming pool to one the size of Lake Ontario. He taught me tons about handling a large environment: clustering, HA/DR, performance troubleshooting and tuning, how to handle changes in a good way, security, when to pull the trigger on an action and when not to, and many more things. Some of these things I had done in my previous environment, but if I ran into a wall, that was it; I could go no further, as I did not know how to get past it. Under Scott, the wall got further away, and when I did reach it, I had someone I could ask for help. Getting stuck was now a delay, not an all-out halt. I learned so much during my 4 years working with him. He let me ask questions when needed and work through an issue on my own when needed. Most importantly, when I screwed up, he would course-correct.

Rick Heiges and Larry Chestnut

I entered the world of consulting, which is like stepping onto an alien world. No longer was I exposed to one environment with its unique set of challenges. I was now being exposed to a different environment each week, with a different set of challenges and requirements. I had to adjust my thinking each week to solve a different set of problems under a new set of rules. I was thankfully paired up with two architects who would guide me through various challenges. They elevated my thinking from the super low-level details of a project to the higher-level thinking of an architect. They taught me to understand the bigger picture and apply that bigger-picture knowledge to the low-level details I was accustomed to. They also both taught me how to present the work I was doing at different client sites to their different audiences. At every client, there are always two audiences. The first is the day-to-day worker: the in-house DBA or other IT staff. The second is management, whether C-level or other. While both audiences appreciate the level of detail I will go to in solving a problem, they are not necessarily both interested in hearing all those details. The IT worker may want to, so they can understand what is going on and resolve the issue at hand, but management does not. They want to know how fast the issue can be fixed and how much impact the solution will have. The message to management needs to be less technical and detailed than the one to the IT staff. I now feel comfortable talking through a solution with either audience thanks to these two.

Mike Hillwig and Andy Mallon

I have not had the pleasure of working with either of these two, yet both have had an impact on my career. While both are highly skilled DBAs, their impact comes from my interactions with them at user group meetings. While working as a consultant, I had to learn to present, but secretly I did not like doing so. I love doing the work, but hate being the center of attention. I would much rather be behind the scenes making things awesome than in front of everyone. I am pretty sure Mike and Andy both understand this about me, yet don't care. Generally, when I go to a user group meeting, I will sit in the back and listen in, somewhat participate in the discussions, and work on something I was dealing with at the office. One of these times, Mike was looking for speakers for upcoming user group sessions. He was telling everyone that we all have a story to tell and can share something that will help members of the group. He then went on to say he was going to pick someone randomly to do a lightning talk at the next meeting. He stopped what he was doing, put his hands on the table, and proceeded to stare at me until I looked up. I was his random pick. As he says, I was volun-told to present. He later volun-told me to speak at SQL Saturday Boston.

While Mike's involvement with the user group has lessened, Andy's has picked up. He has worked tirelessly to make Boston SQL into a strong user group, while also becoming the organizer of the SQL Saturdays in the Boston area. While these two things alone are enough to say thank you for, Andy is always pushing me to present more and blog more. I have had many private conversations with Andy that have left me thinking and inspired to do more. He has also made me realize that while part of me still hates presenting, not out of fear but out of hatred of being the center of attention, another part of me likes it. The part of me that likes it is the part that likes to help people. I like to see others learn and get better. It is a great feeling when you can help someone and see that person grow.

Each person in this post has had a great impact on my career, and I cannot thank them enough. Each has helped me grow and get better over the years. Scott imparted great technical skills. Rick and Larry added to and sharpened those skills and helped me translate them into ways different audiences could understand. Mike and Andy got me out of my head and got me to start giving back.

Automatic Seeding Error

In my lab, I decided to play around with the automatic seeding functionality that is part of Availability Groups. This was sparked by my last post about putting SSISDB into an AG. I wanted to see how it would work for a regular database. When I attempted to do so, I received the following error:

Cannot alter the availability group ‘Group1’, because it does not exist or you do not have permission. (Microsoft SQL Server, Error: 15151)

This seemed odd to me. What would cause this error? What changed in my environment?

First, I realized that I had broken down my AG and recreated it in Management Studio. When I created the group, I did not put any databases into it, so I could test automatic seeding.

Next, I decided to go through the process again, and instead of using the GUI, I scripted out the commands and ran them manually. This way I could pinpoint where the process was failing.

-- You must execute the following script in SQLCMD mode.
:Connect SQL2016N1

USE [master]
GO

-- Enable automatic seeding to the secondary replica
ALTER AVAILABILITY GROUP [Group1]
MODIFY REPLICA ON N'SQL2016N2' WITH (SEEDING_MODE = AUTOMATIC)
GO

-- Add the database to the availability group
ALTER AVAILABILITY GROUP [Group1]
ADD DATABASE [WWI_Preparation];
GO

:Connect SQL2016N2

-- Allow the AG to create the seeded database on the secondary
ALTER AVAILABILITY GROUP [Group1] GRANT CREATE ANY DATABASE;
GO

When I ran the above code, the steps that ran on SQL2016N1 succeeded, but those that ran on SQL2016N2 failed. Additionally, I received the below message:

Msg 15151, Level 16, State 1, Line 24
Cannot alter the availability group ‘Group1’, because it does not exist or you do not have permission.

I went and looked at the AlwaysOn High Availability folder in SSMS on SQL2016N2 and found nothing.


So, the group could not be given permission to create any database because the group did not exist on the secondary. Then it dawned on me as to why.

Creating an availability group without databases using the GUI only creates the group on the primary. It does not reach out to the secondary replicas to join those nodes to the group. That must be done manually. So, to resolve the error, I had to run this code on SQL2016N2:


USE [master]
GO
-- Join this replica to the availability group that already exists on the primary
ALTER AVAILABILITY GROUP [Group1] JOIN
GO

This allowed the secondary replica to join the AG that was established on SQL2016N1. Then I was able to proceed with the next step of running:


USE [master]
GO
-- Now that the group exists on the secondary, granting the permission succeeds
ALTER AVAILABILITY GROUP [Group1] GRANT CREATE ANY DATABASE;
GO

Once this was completed, the database started seeding from one node to the other.
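
If you want to watch the seeding happen, you can query the automatic seeding DMVs on the primary; something like the below shows the state of each seeding attempt:

-- One row per automatic seeding attempt, with its current state and any failure reason
SELECT ag.name AS ag_name,
       dc.database_name,
       s.current_state,
       s.failure_state_desc
FROM sys.dm_hadr_automatic_seeding AS s
JOIN sys.availability_groups AS ag
    ON s.ag_id = ag.group_id
JOIN sys.availability_databases_cluster AS dc
    ON s.ag_db_id = dc.group_database_id;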

I think the biggest takeaway from this is the importance of using a script to perform DBA actions instead of the GUI. A script allows for a more focused and deliberate execution, where you can see things happen in order. This, in turn, teaches you what is actually happening and allows for greater flexibility when things go bump.