Mistakes were made

I generally avoid Reddit; I figure I have enough things in my life sucking up my time. But from time to time a link comes across my screen that I find interesting. This is one of them.

The user accidentally deleted a production database. Now, I think we can all agree that deleting a database in production is a “bad thing”. But, whose fault is this really?

Yes, one could argue the employee should have been more careful, but let’s back up.

The respondents in the thread raise several good points.

  • Why were the values in the documentation ones that pointed to a LIVE, production database? Why not point to a dev copy, or better yet, one that doesn’t really exist? They expect the person to update the parameters anyway, so the worst case, if the fake ones are left in, is that nothing happens.
  • Why didn’t Production have backups? This is a completely separate question, but a very important one!
  • Why fire him? As many pointed out, he had just learned a VERY valuable lesson, and taught the company a very valuable lesson too!

I’ll admit, I’d done something similar in my career at one of my employers. Except I wasn’t an intern, I was the Director of IT, and my goal in fact WAS to do something on the live database. The mistake I made was a minor one in execution (reversed the direction of an arrow on the GUI before hitting the Execute button) but disastrous in terms of impact. And of course there wasn’t a recent enough backup.

I wasn’t fired for that. I did develop and enforce our change control documents after that and always ensured, as much as possible, that we had adequate backups. (Later in my career, a larger muckup did get me… “given the opportunities to apply my skills elsewhere”, but there were other factors involved and I arguably wasn’t fired JUST for the initial issue.)

As the Director of IT, I made a point of telling my employees that story. And I explained to them that I expected them to make mistakes. If they didn’t, they probably weren’t trying hard enough. But I told them the two things I wouldn’t accept were lying about a mistake (trying to cover it up, blaming others, etc.) and repeatedly making the same mistake.

I wrote in an earlier post that mistakes were avoidable. But as I pointed out, it’s actually more complex than that. Some mistakes are avoidable. Or, at least, they can be managed. For example, it is unfortunately likely that at some point, someone, somewhere, will munge production data. Perhaps they won’t delete it all, or perhaps they’ll make a White Ford Taurus type mistake, but it will happen. So you have safeguards in place. First, limit the number of people in a position to make such a mistake. Second, have adequate backups. There are probably other steps you can take to reduce the chance of error and mitigate it when it does eventually happen. Work on those.

But don’t fire the person who just learned a valuable lesson. They’re the one least likely to make that mistake again. Me, I’d probably fire the CTO for not having backups, for having production values in documentation like that, AND for firing the new guy.


When Life hands you Lemons

You make lemonade! Right? Ok, but how?

Ok, this is the 21st Century, now we use mixes. That makes it even easier, right?

But, I’ve given this some thought, and like many procedures there’s not necessarily a right way to do it. That said, I may change the procedure I use.

Ok, so I use one of those little pouches that make a lemon-flavored drink. I’m hesitant to call it actual lemonade, but let’s go with it.

Typically my process is to take the container, fill a drinking glass, and if the container is empty or has only a little bit left in it, make more. (Obviously, if there’s a lot left, I just put the container back in the refrigerator. 🙂 )

So still pretty simple, right? Or is it?

Basically you put the powder in the container and then add water.

Or do you put the water in and then add the powder?

You may ask, “What difference does it make?”

Ultimately, it shouldn’t; in either case you end up with a lemon-flavored drink as your final product.

All along I’ve been going the route of putting the powder in first then adding the water. There was a rational reason for this: the turbulence of the water entering the container would help mix it and it would require less shaking. I thought this was pretty clever.

But then one night as I was filling the container with water (it was sitting in the sink) I got distracted, and by the time I returned my attention to it, I had overfilled the container and water was flowing over the top. Or rather, somewhat diluted lemon-flavored drink was flowing over the top. I had no idea how long this had been going on, but I knew I had an over-filled container that had to have a bit more liquid poured off before I could put it away. It also meant the lemon-flavored drink was going to be diluted by an unknown amount. That is less than optimal.

So the simple solution I figured was to change my procedure. Add the water first and then add the flavoring. That way if there was too much water in the container, I could just pour off the extra and then add the proper amount of powder and have an undiluted lemon-flavored drink.

That worked fine until one day as I was pouring the package, it slipped through my fingers into a half-filled container.  Now I had to find a way to fish it out. Ironically, the easiest way to do it was to overfill it so the package would float to the top. Of course now I was back to diluted lemon-flavored drink. And who knows what was on the outside of the powder package that was now inside the water.

Each procedure has its failure modes. Both, when successful, get me to the same final product.

So, which one is better?

I put in the powder first and then add the water. I could say I have a rational reason, like preferring a slightly diluted lemon-flavored drink over a possibly contaminated one from a dropped-in packet.

But the truth is, it really doesn’t matter which order I do the steps in. Neither failure is fatal, and the two are about equivalent in seriousness.

Old habits die hard, so I stick with my original method.

But, the point is that even in a process as simple as making lemon-flavored drink, there’s more than one way to do it, and either way may be workable. Just make sure you can justify your reasoning.

Deep Drilling

I was reviewing the job history on one of the DR servers of a client of mine and noticed something funny. The last job in the job history table (msdb.dbo.sysjobhistory for those playing along at home) had been recorded in January of this year.

But jobs were still running. It took me a while to track it down, but through some sleuthing I solved the problem. First, I thought the msdb database might have filled up (though that event should have generated an error I’d have seen). Nope.

Then I thought perhaps the table itself was full somehow. Nope, only about 32,000 records.  No luck.

I finally tried to run sp_sqlagent_log_jobhistory manually with some made up job information.

Msg 8115, Level 16, State 1, Procedure sp_sqlagent_log_jobhistory, Line 99
Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.

Now we’re getting someplace. After a minor diversion of my own making, I then ran

DBCC CheckIDENT('sysjobhistory',NORESEED)

This returned a value of 2147483647. Hmm, that number looks VERY suspicious. A quick check of Books Online confirmed that’s the max value of a signed int.
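
If you want to catch this sort of thing before it bites, a query along these lines (a sketch of my own, run in msdb, not something from that troubleshooting session) lists each identity column’s current value and data type so you can eyeball how much headroom is left:

-- Sketch: list identity columns and their current values so you can spot
-- ones creeping toward the ceiling of their data type (int tops out at 2147483647).
SELECT OBJECT_NAME(ic.object_id) AS table_name,
       ic.name                   AS column_name,
       t.name                    AS data_type,
       ic.last_value
FROM sys.identity_columns AS ic
JOIN sys.types AS t ON t.user_type_id = ic.user_type_id
ORDER BY table_name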

Now, the simple solution, which worked for me in this case, was to issue a

truncate table sysjobhistory

This removed all the rows in the table AND reset the IDENTITY value. Normally I’d hate to lose history information, but since this was 6 months old and seriously out of date it was acceptable. I could have merely reset the IDENTITY seed value, but there’s no guarantee I would not have then had collisions within the table later on. So this was the safest solution.
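
For completeness, the alternative I decided against would have looked something like the line below. The catch is that any rows still in the table keep their old instance_id values, so new inserts could eventually collide with them; hence the truncate.

-- The road not taken: reset the seed but keep the existing rows (collision risk).
DBCC CHECKIDENT('sysjobhistory', RESEED, 0)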

But wait, there was more. It kept bugging me that I had somehow reached the 2 BILLION limit on this table’s IDENTITY column. Sure, it handles log-shipping for about a dozen databases and as a result does about 48 jobs an hour, plus other jobs. But for a year that should generate less than 1 million rows. This database server hasn’t been running for 2 thousand years.

So, I decided to monitor things a bit and wait for a few jobs to run.

Then, I executed the following query.

select max(instance_id) from sysjobhistory

This returned a value along the lines of 232031. Somehow, in the space of an hour or less, my sysjobhistory IDENTITY column had increased by over 232,000. This made no sense. But it did explain how I had hit the 2 billion limit!

So I started looking at the sysjobhistory table in detail. And I noticed gaps. Some make sense (if a job has multiple steps, it may temporarily insert a row and then roll it back once the job is done and put in a job completion record, and with the way IDENTITY columns work, this explains some small gaps). For example, there was a gap in instance_id from 868 to 875. Ok that didn’t bother me. BUT, the next value after 875 was 6,602. That was a huge gap! Then I saw a gap from 6,819 to 56,692. Another huge gap. As the movie says, “Something strange was going on in the neighborhood”.
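
A query along these lines (my own sketch; it needs SQL Server 2012 or later for LAG) makes those gaps easy to spot:

-- Sketch: compare each instance_id to the previous one and report big jumps.
SELECT prev_id, instance_id, instance_id - prev_id AS gap
FROM (SELECT instance_id,
             LAG(instance_id) OVER (ORDER BY instance_id) AS prev_id
      FROM msdb.dbo.sysjobhistory) AS h
WHERE instance_id - prev_id > 100
ORDER BY gap DESC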

I did a bit more drilling and found that 3 jobs handling log-shipping from a particular server were showing HUGE amounts of history. Drilling deeper, I found they were generating errors: “Could not delete log file….” Sure enough, I went to the directories where the files were stored and there were log files going back to November. Each directory had close to 22,000 log files that should have been deleted and weren’t.
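
(The drilling itself was nothing fancy; a grouping query roughly like this one, a sketch rather than the exact query I ran, shows which jobs account for the bulk of the history rows.)

-- Sketch: which jobs are generating the most history records?
SELECT j.name AS job_name, COUNT(*) AS history_rows
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
GROUP BY j.name
ORDER BY history_rows DESC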

Now I was closer to an answer. Back in November we had had issues with this server and I had to do a partial rebuild of it. And back then I had had some other issues related to log-shipping and permissions. I first checked permissions, but everything seemed fine.

I then decided to check attributes and, sure enough, all these files (based on the subdirectory attribute setting) had the R (read-only) attribute set. No wonder they couldn’t be deleted.

Now I’m trying to figure out how they got their attribute values set to R. (This is a non-traditional log-shipping setup, so it doesn’t use the built-in SQL Server tools to copy the files. It uses rsync to copy files through an SSH tunnel.)

So the mystery isn’t fully solved. It won’t be until I understand why they had an R value and if it will happen again. That particular issue I’m still drilling into. But at least now I know why I hit the 2 billion limit on my history table’s IDENTITY column.

But, this is a good example of why it’s necessary to follow through an error to its root cause. All too often as an IT manager I’ve seen people who reported to me fix the immediate symptom, but not the root cause. Had I done that here, i.e. simply cleared the history and reset the IDENTITY value, I’d have faced the same problem again a few weeks or months from now.

Moral of the story: When troubleshooting, it’s almost always worth taking the time to figure out not just what happened and fix that, but WHY it happened, and prevent it from happening again.

Experimenting

There are times when you have to take at face value what you are told.

There are 1.31 billion people living in China. This is according to several sources (which all probably go back to the same official document from the Chinese government). I’m willing to believe that number. I’m certainly not going to go to China and start counting heads. For one, I don’t have the time; for another, I might look awfully weird doing so. It’s also accurate enough for any discussions I might have about China. But if I were going to knit caps for every person in China, I might want a more accurate number.

That said, sometimes one shouldn’t take facts at face value. A case in point is given below. Let me start out by saying the person who gave me this fact wasn’t wrong. At least they’re no more wrong than the person who tells me that the acceleration due to gravity is 9.8m/s². No, they are at worst inaccurate and more likely imprecise. Acceleration due to gravity here on Earth IS roughly 9.8m/s². But it varies depending on where on the surface I am. And if I’m on the Moon it’s a completely different value.

Sometimes it is in fact possible to test a claim, and it’s often worth it. I work with SQL Server, and this is very true here. If a DBA tells you with absolute certainty that a specific setting should be set, or a query must be written a specific way, or an index rebuilt automatically at certain times, ask why. The worst answer they can give is, “I read it some place.” (Please note, this is a bit different from saying, “Generally it’s best practice to do X”. Now we’re back to saying 9.8m/s², which is good enough for most things, but may not be good enough if, say, you want to precisely calibrate a piece of laboratory equipment.)

The best answer is “because I tested it and found that it works best”.

So, last night I had the pleasure of listening to Thomas Grohser speak on the SQL IO engine at a local SQL Server User Group meeting. As always it was a great talk. At one point he was talking about backups and various ways to optimize them. He made a comment about setting the maxtransfersize to 4MB being ideal. Now, I’m sure he’d be the first to add the caveat, “it depends”. He also mentioned how much compression can help.

But I was curious and wanted to test it. Fortunately I had access to a database that was approximately 15GB in size. This seemed like the perfect size with which to test things.

I started with:

backup database TESTDB to disk='Z:\backups\TESTDB_4MB.BAK' with maxtransfersize=4194304

This took approximately 470 seconds and had a transfer rate of 31.151 MB/sec.

backup database TESTDB to disk='Z:\backups\TESTDB_4MB_COMP.BAK' with maxtransfersize=4194304, compression

This took approximately 237 seconds and a transfer rate of 61.681 MB/sec.

This is almost twice as fast.  While we’re chewing up a few more CPU cycles, we’re writing a lot less data.  So this makes a lot of sense. And of course now I can fit more backups on my disk. So compression is a nice win.

But what about the maxtransfersize?

backup database TESTDB to disk='Z:\backups\TESTDB.BAK'

This took approximately 515 seconds and a transfer rate of 28.410 MB/sec. So far, it looks like changing the maxtransfersize does help a bit (about 8%) over the default.

backup database TESTDB to disk='Z:\backups\TESTDB_comp.BAK' with compression

This took approximately 184 seconds with a transfer rate of 79.651 MB/sec.  This is the fastest of the 4 tests and by a noticeable amount.

Why? I honestly don’t know. If I were really trying to optimize my backups, most likely I’d run each of these tests 5-10 more times and take an average. This may be an outlier. Or perhaps the 4MB test with compression ran slower than normal. Or there may be something about the disk setup in this particular case that makes it the fastest method.
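
If you do rerun the tests, you don’t have to stopwatch them by hand; msdb records every backup, so a query like this (a sketch, with the test database name hard-coded) pulls the durations and sizes back out for averaging:

-- Sketch: duration and size of recent backups of the test database, from msdb.
SELECT database_name,
       backup_start_date,
       DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec,
       backup_size / 1048576.0            AS size_mb,
       compressed_backup_size / 1048576.0 AS compressed_mb
FROM msdb.dbo.backupset
WHERE database_name = 'TESTDB'
ORDER BY backup_start_date DESC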

The point is, this is something that is easy to setup and test. The entire testing took me about 30 minutes and was done while I was watching tv last night.

So before you simply act on something you read on some blog someplace about “you should do X to SQL Server”, take the time to test it. Perhaps it’s a great solution in your case. Perhaps it’s not. Perhaps you can end up finding an even better solution.

On Call

I want to pass on a video I’ve finally gotten around to watching:

Dave O’Conner speaks

I’ve managed a number of on-call teams to various levels of success. One point I’d add that makes a difference is good buy-in from above.

He addresses several good points, most of which I fully agree with and have even, at various times, adopted at my various jobs.

One thing he mentions is availability. Too often folks claim they need 99.999% uptime. (That works out to roughly five minutes of downtime a year.) My question has often been “Why?”, followed by, “Are you willing to pay for that?” Often the why boils down to “umm… because…” and the answer to paying for it was “no”, at least once they realized the true cost.

I also had a rule that I sometimes used: “If there was no possible response or no response necessary, don’t bother alerting!”.

An example might be traffic flow. I’ve seen setups where if the traffic exceeds a certain threshold once in, say, a one-hour period (assume monitoring every 5 seconds), a page would go out. Why? By the time you respond, it’s gone and there’s nothing to do.

A far better response is to automate it such that if it happens more than X times in Y minutes, THEN send an alert.

In some cases, simply retrying works.  In the SQL world I’ve seen re-index jobs fail due to locking or other issues.  I like my sleep.  So I set up most of my jobs to retry at least once on failure.

Then, later, I’ll review the logs. If I see constant retries, I’ll schedule time to fix it.

At one client, we had an issue where a job would randomly fail maybe once a month.  They would page someone about it, who would rerun the job and it would succeed.

I looked at the history and realized that simply putting in a delay of about 5 minutes on a failure and retrying would reduce the number of times someone had to be called from about once a month to once every 3 years or so. Fifteen minutes of reviewing the problem during a normal 9-5 timeframe and 5 minutes of checking the math and implementing the fix meant the on-call person could get more sleep every month. A real win.
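
In SQL Agent terms the fix is just two settings on the job step; a sketch (the job name and step id here are placeholders, not the client’s real ones) looks like this:

-- Sketch: retry the step once, waiting 5 minutes between attempts.
-- Job name and step id are made up for illustration.
EXEC msdb.dbo.sp_update_jobstep
     @job_name       = N'LogShipping - Copy files',  -- hypothetical job name
     @step_id        = 1,
     @retry_attempts = 1,
     @retry_interval = 5   -- minutes between retries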

Moral of the story: Not everything is critical, and if it is, handle it as if it is, not as an afterthought.

Link

As a middle manager in several start-ups I’ve had to deal with being short of resources of all kinds.  But, at the height of the first dot-com bubble, I had a great team.  No, not all of them were equals, but each pulled their weight and each could be relied on to perform well, in their area of expertise.

One guy was a great troubleshooter. He’d leave no stone unturned, and you could tell if a problem was bothering him since he’d fixate on it until he understood it AND had solved the root cause. It wasn’t good enough for him to fix the current problem. He wanted to make sure it couldn’t happen again. However, what he wasn’t good at was the rote, boring procedures. “Install this package in exactly this way, following these steps.” He’d tend to go off script, and sometimes that caused problems.

On the other hand, I had another guy who was about 2 decades older and not from an IT background.  Troubleshooting wasn’t his forte and he honestly didn’t have the skill set to do a great job at it.

However, he excelled at the mundane, routine, rote tasks. Now this may sound like a slight, but far from it. The truth is, in most cases in IT you’re dealing with routine, rote tasks. In an ideal world, you might never have emergencies.

Now, this isn’t to say he couldn’t solve most problems as they came up. It was simply that if a problem was overly complex or rather obscure, it wasn’t his forte.

I learned, when I wanted to get stuff done, that assigning the routine stuff to him worked far better than assigning it to the first guy. And just the reverse: if I had some weird problem that needed debugging and wasn’t easy to solve, the first guy was the one to throw at it.

Each excelled in their own way and the team did best when I remembered how to best utilize their talents.

Link

Ok, perhaps this isn’t Buckland, but alerts can be important.

I’ve been wanting to write something on alerts for quite a while, and this isn’t quite it. Rather, I’ll reference another URL on alerting. This sums up quite well much of what I’ve wanted to say for a while.

To single out one important rule: if you’re going to get an alert, be prepared to act upon it!

Knowing that you pegged your server CPU at 100% every once in a while might be useful, but it’s probably not something to wake people up about. And if it is hitting 100% infrequently, there’s probably nothing worth doing. On the other hand, if it’s routinely hitting 100% CPU, perhaps your action plan is to spin up another web server, or move load to a different database. Or perhaps your plan is even to do nothing. But planning to do nothing and accepting that is very different from not planning and simply doing nothing because you have no idea what to do.

Note, alerting is very different from monitoring and logging.  If my CPU is hitting 100% once a week for 5 seconds, and then twice a week for 6 seconds, and then 4-5 times a day, for 10 seconds, I want to start making plans. But again, I probably don’t want to wake someone up.

Monitor, yes. Alert: maybe.
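
On the monitoring side, SQL Server itself keeps a rolling record of recent CPU use that you can trend without paging anyone. A sketch (one sample per minute, covering roughly the last few hours, so it’s for trends rather than catching 5-second spikes):

-- Sketch: recent CPU utilization as recorded by the scheduler monitor ring buffer.
SELECT DATEADD(ms, rb.[timestamp] - si.ms_ticks, GETDATE()) AS sample_time,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS idle_pct
FROM (SELECT [timestamp], CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
CROSS JOIN sys.dm_os_sys_info AS si
ORDER BY sample_time DESC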

That’s it for tonight.