Wednesday, March 28, 2012

Network Backup Again - What are the Pros and Cons

Thanks for the help on the previous thread.

It seems to me that either by accident or design, SQL Server tends to steer you away from backing up directly over the network.

Are there reasons for not doing this? You obviously don't want to leave your backups only on a local drive in case that drive fails.

Some possibilities that I can think of are:

1. Local drives have faster access times and SQL backups can get quite large. I did a quick test and found that a network backup takes 2 to 3 times longer than one to a local drive.

2. Backing up over the network could hog too much bandwidth. I haven't tested this and would be surprised if it's true.

3. There could be some reason that you don't want the Server and Agent services running under a domain account and would rather leave them on the Local System account. I am not aware of any such reason, by the way.

4. Local drives presumably have slightly higher availability than network drives. If the server is running, the drive should be available.

If your local drive fails you'll need the backup to restore the database, and that is where network backups help. But you can also keep a copy of the backup on local disk and use a copy command to move it to the network as well, so that you have it in two places in the event of a disaster at either one. Take the backup file from disk to tape if possible. Yes, a backup taken to a local drive will be faster. Also note that you need to change the SQL service account from Local System in order to access network resources.
|||

1 and 2 are valid reasons. On x86 there are OS limitations to how much data can be backed up over the network due to virtual address space issues. Historically, networks have been less reliable than local hard disks and our documentation may still reflect that perception. That may not always be the case anymore.

A common practice is to back up locally, compress and copy the file remotely, then validate that the copy did not change any bytes before removing the local backup.
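That workflow can be sketched in a few lines of Python (a minimal illustration, not anything from the thread; the function names are my own, gzip stands in for whatever compressor you prefer, and a hash comparison stands in for "validate that the bytes did not change"):

```python
import gzip
import hashlib
import os
import shutil

def sha256_of(path, chunk=1 << 20):
    """Hash a file in chunks so large backups never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def compress_copy_validate(local_bak, remote_dir):
    """Compress a local backup, copy it to the network, verify the copy,
    and only then delete the original backup file."""
    gz_path = local_bak + ".gz"
    with open(local_bak, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    remote_path = os.path.join(remote_dir, os.path.basename(gz_path))
    shutil.copy2(gz_path, remote_path)
    if sha256_of(remote_path) != sha256_of(gz_path):
        raise IOError("remote copy does not match the local archive")
    os.remove(local_bak)  # safe to delete only after validation succeeds
    return remote_path
```

The point of the hash check is ordering: the local file is removed only after the remote copy has been proven byte-identical to the archive.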

|||

Thanks, your reply was very helpful.

It had occurred to me that the reasons could be partly historic. Another example: there was a time when tape backup was more or less the norm, and tape drives were usually local devices.

Is there a preferred compression utility (WinZip or Windows compression, for example)? The more paranoid amongst my associates ask why risk compressing the backups only to find that they can't be decompressed when you need them. There are obvious advantages, though, when copying over the network and in storage space savings.

Backup files DO compress very effectively, which suggests that SQL Backup doesn't do any compression itself. It might be useful if SQL Backup/Restore included an option to compress/decompress the files.
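The compressibility claim is easy to demonstrate with synthetic data (a sketch using Python's zlib as a stand-in for WinZip; the "backup" here is invented page-like bytes, not a real SQL Server backup, but database pages do tend to contain repeated structure and zero padding):

```python
import zlib

# Invented stand-in for a backup file: repeated page headers plus padding,
# the kind of redundancy that makes real backup files deflate so well.
fake_backup = (b"SQLPAGEHEADER" + b"\x00" * 51 + b"row data") * 5000

compressed = zlib.compress(fake_backup, 9)
ratio = len(compressed) / len(fake_backup)
print(f"compressed to {ratio:.1%} of original size")
```

Real-world ratios depend entirely on the data, but uncompressed backups of typical databases usually shrink well below the NTFS figure quoted later in the thread.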

Are there any reasons why you might not want to change the login for the MSSQLSERVER service? Is the IT department likely to think of reasons to object, for example that you would need an account with a password that never expires? Such an account would be necessary even just to copy the files afterwards, of course.

My solution, by the way, is a VBScript that copies the files, checking that they don't already exist at the destination, then deletes any files over a certain age, keeping only the latest backup on the server and maybe three or four on the network. Adding compression might be a good idea.
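The VBScript itself isn't posted; here is a hypothetical sketch of the same copy-and-prune logic in Python (the function name and retention counts are my own, and this version keys "age" on file modification time rather than parsing dates out of file names):

```python
import os
import shutil

def copy_and_prune(local_dir, remote_dir, keep_local=1, keep_remote=4):
    """Copy .bak files to the network share unless already there, then
    keep only the newest `keep_local` locally and `keep_remote` remotely."""
    def baks(d):
        names = [f for f in os.listdir(d) if f.endswith(".bak")]
        # Oldest first, so slicing off the tail keeps the newest files.
        return sorted(names, key=lambda f: os.path.getmtime(os.path.join(d, f)))

    for name in baks(local_dir):
        dest = os.path.join(remote_dir, name)
        if not os.path.exists(dest):  # don't re-copy files already there
            shutil.copy2(os.path.join(local_dir, name), dest)

    for name in baks(local_dir)[:-keep_local]:
        os.remove(os.path.join(local_dir, name))
    for name in baks(remote_dir)[:-keep_remote]:
        os.remove(os.path.join(remote_dir, name))
```

Note that `shutil.copy2` preserves modification times, so the remote pruning sees the same ordering as the local side.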

|||

We expect to add a compression capability to Backup in the next release.

I am not familiar with what existing customers use today.

|||

I am thinking of marking the backup folder to be compacted (NTFS uses the word "compact" rather than "compress"). I THINK the backup files would automatically be compacted by NTFS immediately after each file is created. I have already checked that you can point to a compacted backup file when restoring and it is automatically un-compacted during the restore.

Using native NTFS functionality means no reliance on third-party software, no need to explicitly compact, no change of file name (for file copying), and transparent un-compacting during restore.

NTFS seems to compact to about 25% of the original size; WinZip gets closer to 12.5%.

Adding compression to Backup/Restore seems a really logical step; I assume it would approach the 12.5% compression ratio.

|||

Note that if you do back up directly to a compacted folder, you'll be taking the hit of compression as part of the backup operation. This could make backups run slower. Other than that, it should work fine.

|||

I tried this out and you are correct, the backup takes longer (about twice the time in my simple test). There is a cost but the file space is reduced to about 25% of what it would have been.

Part of the aim was to save bandwidth when copying the file from the local drive to the network, and it seems to fail in this regard. If the destination directory is not compacted, the file is decompressed when copied :-(. If the destination directory IS compacted, there still seems to be no saving in copying time and therefore, I assume, bandwidth (these can be very large files). Tell me it isn't so, but I suspect the file is decompressed, copied, and compacted again. That would make sense, since NTFS compression is an attribute of how the file is stored on its own volume, so a copy operation reads and transmits the logical (uncompressed) bytes.

It might make sense to have the network directory compacted so that you can keep the last few backups and save space on the network. I think you would only compact the local drive if you were really short of space, and you would not want to live with that situation on a SQL Server anyway.

If I used a third-party compression tool I assume the compressed file would be copied as-is, but that is not something I want to do. The last thing you need under the pressure of disaster recovery is some third-party software to deal with.

I think that we can all look forward to compression being included in backup/restore in the next SQL Server release (2008?).
