Ok, apologies to my loyal reader or two for not blogging in a LONG time. Either life has been too busy, or when things have calmed down, I haven’t had anything to blog about.
Normally I don’t want to blog weird technical stuff, but this time I will.
I’m currently working for a client doing some DBA work. Lots of fun actually.
Said client has an issue. They want to have a standby datacenter. Great idea. Log-shipping will work well here for them. Just one catch: corporate won’t allow anything through the firewall unless it’s SSH. Hmm.
No real problem, I figure I can do “log-shipping in the blind”. Basically: take the transaction logs from the primary server, use rsync to get them through the firewall, and apply them to the secondary server. If need be, I’ll custom-write the jobs. It should be easy.
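To give a feel for the copy step, here’s a minimal sketch of the rsync-over-SSH idea. All the paths, the hostname, and the helper function are hypothetical, and on a Windows server you’d need something like cwRsync; this is just the shape of the command, not the client’s actual setup.

```python
# Hypothetical paths and hostname -- adjust for your environment.
SRC = "/var/backups/logs/"                # log-backup folder on the primary
DEST = "dba@standby:/var/backups/logs/"   # secondary server, reachable only via SSH

def build_rsync_cmd(src: str, dest: str) -> list:
    # -a preserves file attributes, -z compresses, and -e ssh tunnels
    # the transfer through the one protocol the firewall allows.
    # The include/exclude pair ships only .trn files from a flat folder.
    return ["rsync", "-az", "-e", "ssh",
            "--include=*.trn", "--exclude=*", src, dest]

cmd = build_rsync_cmd(SRC, DEST)
print(" ".join(cmd))
```

A scheduled task on the primary could run that command on whatever interval matches the log-backup schedule.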
Here’s one example:
The key part (for me) is the section titled “Half a Log-Shipping Config”.
Pretty straightforward, no DBA would balk at that.
So, I set it all up. Set up some scheduled tasks. Make a backup, copy it to the secondary server, and restore it. So far so good. Take the logs created by the existing log-backup job, and watch as they automatically get shipped to the datacenter.
“Hey, this is going great. I’ll be done before lunch.”
Manually apply a log or two to make sure there are no issues.
“Works perfect. Lunch is starting to look real good. Maybe I’ll break early and go out.”
Set up a scheduled task to run the job created above.
Fails. “Oh fudge. (Hmm, maybe some fudge after lunch?)”
Oh wait, just a typo in the directory.
Rebuild job. Run it. Success!
Great, let’s check things out.
“Hmm, the .TUF (transaction undo file) isn’t there.”
Let’s look at the jobs.
Now here, you’ll have to bear with me. Again, the corporate rules will NOT permit me to cut-paste even simple text error messages from a server in the datacenter.
But basically I get a bunch of messages along the lines of:
2013-06-26 05:00:01.92 Skipped log backup file. Secondary DB: ‘MyDemo’, File: ‘\\NAS\Logshipping\MyDemo_201306261156.trn’
A bunch of these.
Well, a lot of Googling suggested that since the log files were empty (this database doesn’t get much traffic), SQL Server was smart enough to know there was nothing to apply.
Sure enough, manually applying them showed there was nothing for them to do. I needed a real transaction log with a transaction in it. No problem. Go to original database. Do a quick update. Create transaction log and wait for the automated log copier to get it to the right place.
“Hmm, maybe I’ll just do a late lunch today.”
Get it to the secondary server. Run the job.
“Hmm. Skipped the files I expected it to skip. No problem.”
Gets to the file I expect it to apply:
(now I’m retyping rather than cutting/pasting)
2013-06-26 13:14:05.43 Found first log backup to restore. Secondary DB: ‘MyDemo’; File:
2013-06-26 13:14:05.44 The restore option was successful. Secondary Database ‘MyDemo’ Number of log backup files restored: 0
Ayup. You read it right. It found the right file. It said the restore was successful. Then it said 0 files were restored.
Now, I eventually broke for lunch. And dinner. And bed. And breakfast. And another lunch and dinner and more sleep. Also a couple of bike rides and a bunch of meetings and other stuff.
But truth is, I was stumped. I tried everything. Rebuilt the job. Tried manually updating the tables on the secondary to tell it I had already applied a log (that got me a weird datetime error I never quite figured out).
I could manually apply the logs just fine.
Log-shipping between two servers within the datacenter worked just fine.
Why wasn’t this working? I mean a log file is a log file right?
Well, apparently yes and no.
The ONE difference I noticed was that the transaction logs from the working test had filenames using UTC timestamps, including seconds.
The ones I was shipping came from a standard maintenance plan I had set up, and their times were in local time, without the seconds.
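To make the naming difference concrete, here’s a quick sketch of the filename format the native log-shipping backup job produced in my tests. Treat the exact format as my observation, not documented behavior, and the function name is mine:

```python
from datetime import datetime, timezone

def native_logshipping_name(db: str, when_utc: datetime) -> str:
    # In my tests, native log shipping named its backups
    # DbName_yyyymmddhhmmss.trn, timestamped in UTC with seconds.
    # The maintenance-plan files used local time and dropped the seconds,
    # which is the difference the restore job apparently cared about.
    return f"{db}_{when_utc.strftime('%Y%m%d%H%M%S')}.trn"

ts = datetime(2013, 6, 26, 11, 56, 0, tzinfo=timezone.utc)
print(native_logshipping_name("MyDemo", ts))  # MyDemo_20130626115600.trn
```

Compare that 14-digit UTC name with the 12-digit, local-time `MyDemo_201306261156.trn` from the skipped-file messages above.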
BTW, this http://www.sqlservercentral.com/Forums/Topic351549-146-1.aspx#bm438143 is what also helped me wonder about this.
“It couldn’t be THAT simple could it?”
So I set up a database in the primary datacenter, using the full recovery model, and set up log-shipping on that side. Now, I still couldn’t provide any data about the secondary, but I could at least script the primary side that does the backups.
Set that up. Shipped over a database, some logs. Now I’d like to say it worked on the first try, but I ran into some issues (completely of my own doing, so not relevant here).
But third time’s the charm, as they say. Got a backup over to the secondary. Let the autosync job move over some transaction logs (including several with transactions).
Then, on the secondary, ran the job I had previously handcrafted and… SUCCESS.
So, yes, you can do “blind” log-shipping (which I knew).
You can setup the secondary by hand.
But apparently you can’t use your existing transaction-log backup job. You’re better off setting up the log backups on the primary using the normal log-shipping tools and then shipping those.
So lesson learned.
Sometimes, it’s enough to know there’s a right answer to keep you driving towards it.
And now time for dinner.