About Greg Moore

Founder and owner of Green Mountain Software, a consulting firm based in the Capital District of New York focusing on SQL Server. Lately I've been doing as much programming as DBA work, so I'm learning a lot more about C# and VB.Net than I knew a couple of years ago. When I'm not in front of a computer or with my family, I'm often out caving or teaching cave rescue skills.

Mistakes were made

I generally avoid Reddit; I figure I have enough things in my life sucking up my time. But from time to time a link comes across my screen that I find interesting. This is one of them.

The user accidentally deleted a production database. Now, I think we can all agree that deleting a database in production is a “bad thing”. But, whose fault is this really?

Yes, one could argue the employee should have been more careful, but let’s back up.

The respondents in the thread raise several good points.

  • Why were the values in the documentation ones that pointed to a LIVE, production database? Why not point to a dev copy or, better yet, one that doesn’t really exist? They expect the person to update the parameters anyway, so worst case, if they put in the fake ones, nothing happens.
  • Why didn’t Production have backups? This is a completely separate question, but a very important one!
  • Why fire him? As many pointed out, he had just learned a VERY valuable lesson, and taught the company a very valuable lesson too!

I’ll admit, I’d done something similar in my career at one of my employers. Except I wasn’t an intern, I was the Director of IT, and my goal in fact WAS to do something on the live database. The mistake I made was a minor one in execution (reversed the direction of an arrow on the GUI before hitting the Execute button) but disastrous in terms of impact. And of course there wasn’t a recent enough backup.

I wasn’t fired for that. I did develop and enforce our change control documents after that and always ensured, as much as possible, that we had adequate backups. (Later in my career, a larger muckup did get me… “given the opportunities to apply my skills elsewhere”, but there were other factors involved and I arguably wasn’t fired JUST for the initial issue.)

As the Director of IT, I made a point of telling my employees that story. And I explained to them, that I expected them to make mistakes. If they didn’t they probably weren’t trying hard enough. But I told them the two things that I wouldn’t accept would be lying about a mistake (trying to cover it up, or blame others, etc) and repeatedly making the same mistake.

I wrote in an earlier post that mistakes were avoidable. But as I pointed out, it’s actually more complex than that. Some mistakes are avoidable. Or, at least, they can be managed. For example, it is unfortunately likely that at some point, someone, somewhere, will munge production data. Perhaps they won’t delete it all, or perhaps they’ll make a White Ford Taurus type mistake, but it will happen. So you have safeguards in place. First, limit the number of people in a position to make such a mistake. Second, have adequate backups. There are probably other steps you can take to reduce the chance of error and mitigate it when it does eventually happen. Work on those.

But don’t fire the person who just learned a valuable lesson. They’re the one that is least likely to make that mistake again.  Me, I’d probably fire the CTO for not having backups and for having production values in documentation like that AND for firing the new guy.

SQL Saturday NYC 2017 Recap

Realizing that perhaps SOME entry is better than no entry, I figured I’d write a short one.

This weekend I spent about 27 hours in NYC with my wife Randi. While one goal of the trip was definitely a mini-vacation, the actual impetus was again a SQL Saturday. Again, I was selected to present my talk, “Tips that Saved My Bacon”. Apparently it was well received, since the people nice enough to give feedback on the provided forms gave me top marks across the board.

While that’s always refreshing, it does make me wonder about what the other folks thought. Did they go away satisfied, feeling that there was no useful feedback to provide? Or did they feel they couldn’t provide information since I might be insulted? Or did they simply not bother? I’ll never know.

I’ll say now that good feedback is always appreciated by me. (And feedback that I’m good is always an ego boost. 🙂)

I’ll be presenting again in a couple of weeks at SQL Saturday Philadelphia, this time two talks: again my Bacon talk, and my IT and Plane Crashes talk. A twofer, if you will.

But, this weekend got me thinking about my weekends this year. I’ll have spent at least 15 days on Cave Rescue stuff (several weekends plus a week of teaching) and at least 4 SQL Saturdays (Chicago (passed), New York City (passed), Philadelphia and Albany) and 3 days at SQL Summit. So that’s 26 days at least donating time to organizations that I believe strongly in.

What do you do with your time?


Don’t Break the Chain!

If one backup is good, two is better, right?

Not always.

Let me start by saying I’ve often been very skeptical of SQL Server backups done by 3rd party tools. There are really two reasons. For one, many years ago (when I first started working with SQL Server) they often simply weren’t good. They had issues with consistency and the like. Over time and with the advent of services like VSS, that issue is now moot (though, I’ll admit, old habits die hard).

The second reason was I hate to rely on things that I don’t have complete control over. As a DBA, I feel it’s my responsibility to make sure backups are done correctly AND are usable. If I’m not completely in the loop, I get nervous.

Recently, a friend had a problem that brought this issue to light. He was asked to go through their SQL Server backups to find the time period when a particular record was deleted so they could develop a plan for restoring the data deleted in the primary table and in the subsequent cascaded deletes. Nothing too out of the ordinary. A bit tedious, but nothing too terrible.

So, he did what any DBA would do, he restored the full backup of the database for the date in question. Then he found the first transaction log and restored that.  Then he tried to restore the second transaction log.

The log in this backup set begins at LSN 90800000000023300001,  which is too recent to apply to the database. An earlier log backup that  includes LSN 90800000000016600001 can be restored.

Huh? Yeah, apparently there’s a missing log.  He looks at his scheduled tasks. Nope, nothing scheduled. He looks at the filesystem.  Nope, no files there.

He tries a couple of different things, but nope, there’s definitely a missing file. Anyone who knows anything about SQL Server backups knows that you can’t break the chain. If you do, you can’t restore past the break. This can work both ways. I once heard of a situation where the FULL backups weren’t recoverable, but they were able to create a new empty database and apply five years’ worth of transaction logs. Yes, 5 years’ worth.

This was the opposite case. They had the full backup they wanted, but couldn’t restore even 5 hours’ worth of logs.
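For anyone following along, the restore sequence he was walking through looks roughly like this. The database and file names here are made up for illustration; only the shape of the commands matters:

```sql
-- Restore the full backup first, leaving the database ready for log restores
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB_Full.bak'
WITH NORECOVERY, REPLACE;

-- Apply transaction logs in LSN order; any missing log breaks the chain,
-- and every later log fails with the "too recent to apply" error
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_Log1.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_Log2.trn' WITH NORECOVERY;

-- Bring the database online once the chain is exhausted
RESTORE DATABASE SalesDB WITH RECOVERY;
```

The key detail is NORECOVERY: each restore leaves the database waiting for the next log, and SQL Server enforces the LSN chain at every step.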

So where was that missing transaction log backup?

My friend did some more digging in the backup history files in the MSDB and found this tidbit:

backup_start_date   backup_finish_date   first_lsn             last_lsn              physical_device_name
11/9/2016 0:34      11/9/2016 0:34       90800000000016600000  90800000000023300000  NUL

There was the missing transaction backup. It was a few minutes after the full backup, and definitely not part of the scheduled backups he had set up. The best he can figure is the sysadmin had set SAN Snapshot software to take a full backup at midnight and then, for some reason, a transaction log backup just minutes later.

That would have been fine, except for one critical detail. See that rightmost column, physical_device_name? It’s set to NUL. So the missing backup wasn’t made to tape or another spot on the disk or anyplace like that. It was sent to the great bit-bucket in the sky. In other words, my friend was SOL, simply out of luck.
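If you want to hunt for this sort of thing yourself, a query along these lines against msdb's backup history tables will surface log backups whose destination was the NUL device. This is a sketch; adjust the filtering to taste:

```sql
-- List log backups and where they were written; 'NUL' means the bit-bucket
SELECT bs.database_name,
       bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bs.media_set_id = bmf.media_set_id
WHERE bs.type = 'L'                        -- 'L' = transaction log backup
  AND bmf.physical_device_name = 'NUL'
ORDER BY bs.backup_start_date;
```

Dropping the physical_device_name filter turns this into a general audit of where every log backup actually went, which is worth running even when nothing is visibly broken.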

Now, fortunately, the original incident, while a problem for his office, wasn’t a major business-stopping incident. And while he can’t fix the original problem he was facing, he discovered the issues with his backup procedures long before a major incident did occur.

I’m writing about this incident for a couple of reasons. For one, it emphasizes why I feel so strongly about realistic DR tests. Don’t just write your plan down. Do it once in a while. Make it as realistic as it can be.

BTW, one of my favorite tricks that I use for multiple reasons is to set up log-shipping to a 2nd server. Even if the 2nd server can never be used for production because it may lack the performance, you’ll know very quickly if your chain is broken.
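One cheap way to keep an eye on that secondary is to check when a log was last restored there. Something like this, run on the secondary, is a sketch of the idea; restore_type 'L' denotes log restores:

```sql
-- On the log-shipping secondary: last log restore per database.
-- If this timestamp stops advancing, the chain is probably broken.
SELECT rh.destination_database_name,
       MAX(rh.restore_date) AS last_log_restore
FROM msdb.dbo.restorehistory AS rh
WHERE rh.restore_type = 'L'
GROUP BY rh.destination_database_name;
```

Wire that into whatever alerting you already have, and a broken chain announces itself in minutes rather than at restore time.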

Also, I thought this was a great example of where doing things twice doesn’t necessarily make things less resistant to disaster. Yes, had this been setup properly it would have resulted in two separate, full backups being taken, in two separate places. That would have been better. But because of a very simple mistake, the setup was worse than if only one backup had been written.

I’d like to plug my book: IT Disaster Response due out in a bit over a month. Pre-order now!

When Life hands you Lemons

You make lemonade! Right? Ok, but how?

Ok, this is the 21st Century, now we use mixes. That makes it even easier, right?

But, I’ve given this some thought, and like many procedures there’s not necessarily a right way to do it. That said, I may change the procedure I use.

Ok, so I use one of those little pouches that make a lemon-flavored drink. I’m hesitant to call it actual lemonade, but let’s go with it.

Typically my process is to take the container, fill a drinking glass, and if the container is empty, or has only a little bit left in it, make more. (Obviously if there’s a lot left, I just put the container back in the refrigerator. 🙂)

So still pretty simple, right? Or is it?

Basically you put the powder in the container and then add water.

Or do you put the water in and then add the powder?

You may ask, “What difference does it make?”

Ultimately, it shouldn’t, in either case you end up with a lemon-flavored drink as your final product.

All along I’ve been going the route of putting the powder in first then adding the water. There was a rational reason for this: the turbulence of the water entering the container would help mix it and it would require less shaking. I thought this was pretty clever.

But then one night as I was filling the container with water (it was sitting in the sink) I got distracted, and by the time I returned my attention to it, I had overfilled the container and water was flowing over the top. Or rather, somewhat diluted lemon-flavored drink was flowing over the top. I had no idea how long this had been going on, but I knew I had an over-filled container that had to have a bit more liquid poured off before I could put it away. It also meant the lemon-flavored drink was going to be diluted by an unknown amount. That is less than optimal.

So the simple solution I figured was to change my procedure. Add the water first and then add the flavoring. That way if there was too much water in the container, I could just pour off the extra and then add the proper amount of powder and have an undiluted lemon-flavored drink.

That worked fine until one day as I was pouring the package, it slipped through my fingers into a half-filled container.  Now I had to find a way to fish it out. Ironically, the easiest way to do it was to overfill it so the package would float to the top. Of course now I was back to diluted lemon-flavored drink. And who knows what was on the outside of the powder package that was now inside the water.

Each procedure has its failure modes. Both, when successful, get me to the final solution.

So, which one is better?

I put in the powder first and then put in the water. I could say I have a rational reason like preferring slightly diluted lemon-flavored drink over a possibly contaminated lemon-flavored drink from a dropped in packet.

But the truth is, it really doesn’t matter which order I do the steps in. Neither failure is completely fatal, and the two are about equivalent in their seriousness.

Old habits die hard, so I stick with my original method.

But, the point is that even in a process as simple as making lemon-flavored drink, there’s more than one way to do it, and either way may be workable. Just make sure you can justify your reasoning.

Small Disasters

Today was an interesting confluence of events. I was exchanging emails with an associate who is in the middle of getting a Master’s in Disaster Management and we were talking about scale and scope of disasters.

At about the same time I was monitoring email from one of my clients. The thread started out with a fairly minor report: Viewpoint Drive – Water Main Break. Not a huge, earth-shattering disaster. Simply a notice that there was a waterline break in a nearby road, asking people to let management know if they noticed issues.

Within an hour there was a follow-up email stating that there was no longer adequate water pressure in the building and that folks should go home and finish their workday there. Furthermore, employees were told that for the next day the company was securing water bottles for drinking water and would be bringing in portable toilets.

Now, when people think about disasters, often they think about fires and other things that might destroy a building. But, that’s pretty rare.  It’s the other things that companies don’t necessarily plan for. Your company may have adequate backups of all its servers (but are you sure?) but does it have a plan for not having water?

I’ve worked with managers who have basically said, “eh, we can work around that.” Truth is, legally in most cases they can’t. If the building doesn’t have potable water and working sanitation facilities many municipalities won’t allow it to be occupied.

So does your company have a plan? Are the people who can authorize expenditures in the loop? Who is going to declare a disaster and put the plan into motion? Who will sign for the porta-potties when they show up? These are some of the things you have to think about.

So disasters are about more than just a good set of backups. Sometimes it’s about the toilets. Think about that.


Deep Drilling

I was reviewing the job history on one of the DR servers of a client of mine. I noticed something funny. The last job recorded in the job history table (msdb.dbo.sysjobhistory for those playing along at home) was recorded in January of this year.

But jobs were still running. It took me a while to track it down, but through some sleuthing I solved the problem. First, I thought the msdb database might have filled up (though that event should have generated an error I’d have seen). Nope.

Then I thought perhaps the table itself was full somehow. Nope, only about 32,000 records.  No luck.

I finally tried to run sp_sqlagent_log_jobhistory manually with some made up job information.

Msg 8115, Level 16, State 1, Procedure sp_sqlagent_log_jobhistory, Line 99
Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.

Now we’re getting someplace.  After a minor diversion of my own doing I then ran

DBCC CHECKIDENT ('sysjobhistory', NORESEED)

This returned a value of 2147483647. Hmm, that number looks VERY suspicious. A quick check of Books Online confirmed that’s the max value of a signed int.
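A quick way to see how much headroom an identity column has left is to compare the current value against the int ceiling. This sketch assumes the identity column is still typed as int, as it was here:

```sql
-- How close is the sysjobhistory identity to the signed int maximum?
SELECT IDENT_CURRENT('msdb.dbo.sysjobhistory')              AS current_identity,
       2147483647 - IDENT_CURRENT('msdb.dbo.sysjobhistory') AS headroom;
```

A headroom of zero means the very next insert will throw the arithmetic overflow error above.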

Now, the simple solution, which worked for me in this case was to issue a

truncate table sysjobhistory

This removed all the rows in the table AND reset the IDENTITY value. Normally I’d hate to lose history information, but since this was 6 months old and seriously out of date, it was acceptable. I could have merely reset the IDENTITY seed value, but there’s no guarantee I would not have then had collisions within the table later on. So this was the safest solution.

But wait, there was more. It kept bugging me that I had somehow reached the 2 BILLION row limit for this table. Sure, it handles log-shipping for about a dozen databases and as a result does about 48 jobs an hour, plus other jobs.  But for a year that should generate less than 1 million rows.  This database server hasn’t been running for 2 thousand years.
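The arithmetic is easy to sanity-check. Using the rough rate above (48 log-shipping jobs an hour, which is where the "less than 1 million rows" a year figure comes from), exhausting a signed int identity should take millennia:

```python
# Back-of-the-envelope: years to exhaust a signed int identity column
# at the job rate described above (the rate is a rough estimate).
jobs_per_hour = 48                          # log-shipping jobs alone
rows_per_year = jobs_per_hour * 24 * 365    # ~420,000 history rows a year
int_max = 2**31 - 1                         # 2,147,483,647

years_to_exhaust = int_max / rows_per_year
print(round(years_to_exhaust))              # => 5107 -- thousands of years
```

So something other than normal job logging had to be burning identity values.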

So, I decided to monitor things a bit and wait for a few jobs to run.

Then, I executed the following query.

select max(instance_id) from sysjobhistory

This returned a value along the lines of 232031.  Somehow, in the space of an hour or less, my sysjobhistory IDENTITY column had increased by over 232,000. This made no sense. But it did explain how I hit 2 billion rows!

So I started looking at the sysjobhistory table in detail. And I noticed gaps. Some make sense (if a job has multiple steps, it may temporarily insert a row and then roll it back once the job is done and put in a job completion record, and with the way IDENTITY columns work, this explains some small gaps). For example, there was a gap in instance_id from 868 to 875. Ok that didn’t bother me. BUT, the next value after 875 was 6,602. That was a huge gap! Then I saw a gap from 6,819 to 56,692. Another huge gap. As the movie says, “Something strange was going on in the neighborhood”.
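To spot those gaps without eyeballing the table, a windowed query along these lines works (LAG needs SQL Server 2012 or later; the threshold of 100 is an arbitrary choice of mine):

```sql
-- Show jumps in instance_id larger than 100 -- identity values consumed
-- with no surviving rows to account for them
WITH numbered AS (
    SELECT instance_id,
           LAG(instance_id) OVER (ORDER BY instance_id) AS prev_id
    FROM msdb.dbo.sysjobhistory
)
SELECT prev_id, instance_id, instance_id - prev_id AS gap
FROM numbered
WHERE instance_id - prev_id > 100
ORDER BY gap DESC;
```

Sorting by the gap size puts the worst offenders, like the 6,819-to-56,692 jump, right at the top.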

I did a bit more drilling and found 3 jobs that were handling log-shipping from a particular server were showing HUGE amounts of history. Drilling deeper, I found they were generating errors, “Could not delete log file….”. Sure enough I went to the directories where the files were stored and there were log files going back to November.  Each directory had close to 22,000 log files that should have been deleted and weren’t.

Now I was closer to an answer. Back in November we had had issues with this server and I had to do a partial rebuild of it. And back then I had had some other issues related to log-shipping and permissions. I first checked permissions, but everything seemed fine.

I then decided to check attributes and sure enough all these files (based on the subdirectory attribute setting) had the R (readonly) value set. No wonder they couldn’t be deleted.

Now I’m trying to figure out how they got their attribute values set to R. (This is a non-traditional log-shipping setup, so it doesn’t use the built in SQL Server tools to copy the files. It uses rsync to copy files through an SSH tunnel).

So the mystery isn’t fully solved. It won’t be until I understand why they had an R value and if it will happen again.  That particular issue I’m still drilling into. But at least now I know why I hit the 2 billion row limit in my history table.

But, this is a good example of why it’s necessary to follow through an error to its root cause. All too often as an IT manager I’ve seen people who reported to me fix the final issue, but not the root cause. Had I done that here, i.e. simply cleared the history and reset the IDENTITY value, I’d have faced the same problem again a few weeks or months from now.

Moral of the story: When troubleshooting, it’s almost always worth taking the time to figure out not just what happened and fixing that, but WHY it happened and preventing it from happening again.


Testing

This ties in with the concept of experimentation. Thomas Grohser related a story the other night of a case of “yeah, the database failed and we tried to do a restore and found out we couldn’t.”

Apparently their system could somehow make backups, but couldn’t restore them. BIG OOPS.  (Apparently they managed to create an empty database and replay 4.5  years of transaction logs and recover their data. That’s impressive in its own right.)

This is not the first time I’ve worked with a client or heard of a company where their disaster recovery plans didn’t pass the first actual need of it. It may sound obvious, but companies need to test the DR plans. I’m in fact working with a partner on a new business to help companies think about their DR plans. Note, we’re NOT writing or creating DR plans for companies, we’re going to focus on how companies go about actually implementing and testing their DR plans.

Fortunately, right now I’m working with a client that had an uncommon use case. They wanted a restore of the previous night’s backup to a different server every day.

They also wanted to log-ship the database in question to another location.

This wasn’t hard to implement.
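A nightly restore like that can be as simple as the sketch below. The database name and paths are invented, and note that RESTORE VERIFYONLY alone is not a substitute for actually restoring the database:

```sql
-- Optional quick check that the backup file is readable at all
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\ClientDB_Full.bak'
WITH CHECKSUM;

-- The real test: restore last night's full backup to a copy database
RESTORE DATABASE ClientDB_Copy
FROM DISK = N'D:\Backups\ClientDB_Full.bak'
WITH MOVE 'ClientDB'     TO N'D:\Data\ClientDB_Copy.mdf',
     MOVE 'ClientDB_log' TO N'D:\Data\ClientDB_Copy_log.ldf',
     REPLACE, RECOVERY;
```

Schedule it as an Agent job and alert on failure, and every successful run is proof the previous night's backup is actually restorable.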

But what is very nice about this setup is that every 15 minutes we have a built-in automatic test of their log-backups. If for any reason log-backups stop working or a log gets corrupted, we’ll know in fairly short order.

And, with the database copy, we’ll know within a day if their backups fail.  They’re in a position where they’ll never find out 4.5 years later that their backups don’t work.

This client’s DR plan needs a lot of work; they actually have nothing formal written down. However, they know for a fact their data is safe. This is a huge improvement over companies that have a DR plan but have no idea if their data is safe.

Moral of the story: I’d rather know my data is safe and my DR plan needs work than have a DR plan but not have safe data.