
Source backup file has different block size

After changing the compression and deduplication settings for an existing backup job to be more aggressive (which I detailed in my previous post), I ran into the following error: ‘Source backup file has different block size’.

Backup Copy Error

By changing the deduplication setting, I altered the block size of the very next active full backup file created. This upset my backup copy job, which used the backup job as its source. Selecting an ‘Active Full’ on the failing backup copy job resolved the problem.
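If you prefer to script the fix, the sketch below kicks off an active full of the source job using Veeam's PowerShell snap-in. This is a minimal sketch assuming the v9-era VeeamPSSnapin; the job name is a placeholder, and Start-VBRJob -FullBackup applies to regular backup jobs (for the backup copy job itself I used the Active Full action in the GUI).

```powershell
# Minimal sketch, assuming the v9-era Veeam PowerShell snap-in.
Add-PSSnapin VeeamPSSnapin

# "Backup Job 1" is a placeholder - use your own source job name.
$job = Get-VBRJob -Name "Backup Job 1"

# Kick off an active full so the next full backup file is built
# with the new block size.
Start-VBRJob -Job $job -FullBackup
```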


Changing Data Compression and Deduplication settings

You probably already knew that you can change the compression and deduplication settings for existing backup jobs. The new settings have no effect on previously created backup files in the chain; they apply only to backup files created after the change.

For deduplication, the changes take effect after you create an active full backup.

For compression, the change takes effect with the very next backup file created.

Something you may not have known: if you use the reverse incremental backup method, the newly created backup files will contain a mixture of data blocks compressed at different levels.

Let’s say you are backing up using reverse incremental with compression set to ‘None’. After several job sessions, you decide to increase compression from ‘None’ to ‘Optimal’. For reverse incremental backup chains, the full backup file is rebuilt with every job session to incorporate new data blocks. As a result, the full backup file will contain a mixture of data blocks: blocks compressed at the ‘None’ level and blocks compressed at the ‘Optimal’ level.

If you want the newly created backup file to contain data blocks compressed at a single level, you can create an active full backup. An active full backup retrieves all the data for the whole VM image from the production infrastructure and compresses it at the new compression level. All subsequent backup files in the backup chain will also use the new compression level.
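As a rough illustration, the change-then-active-full sequence can be scripted. This is a sketch only: the numeric CompressionLevel values (0 = None, 5 = Optimal) reflect how I understand the job options object to be encoded, so verify them against your own install before relying on this.

```powershell
Add-PSSnapin VeeamPSSnapin

$job = Get-VBRJob -Name "Backup Job 1"   # placeholder job name

# Raise compression from 'None' (0) to 'Optimal' (5).
# The numeric values are assumptions from memory - verify them.
$options = $job.GetOptions()
$options.BackupStorageOptions.CompressionLevel = 5
Set-VBRJobOptions -Job $job -Options $options

# Run an active full so the whole chain uses one compression level.
Start-VBRJob -Job $job -FullBackup
```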

For space-saving goodness, I recommend checking out ReFS and how Veeam can leverage it, you can learn more about ReFS here https://hyperv.veeam.com/blog/benefits-of-refs-file-system-in-windows-server-2016/

Veeam v9 Upgrade Gotchas

Things to remember when upgrading to Veeam v9

Deduplication

  • Local target (16 TB + backup files) – If you upgrade to Veeam Backup & Replication 9.0 from the previous product version, this option will be displayed as Local target (legacy 8MB block size) in the list and will still use a block size of 8 MB. It is recommended that you switch to an option with a smaller block size and create an active full backup to apply the new setting; see the sketch below. https://helpcenter.veeam.com/backup/hyperv/compression_deduplication.html
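If you would rather script the block-size change than click through each job, something like the following should work. The StgBlockSize property and its enum value name are assumptions from memory (as far as I recall, the standard ‘Local target’ option maps to a 1 MB block, KbBlockSize1024), so double-check them in your environment.

```powershell
Add-PSSnapin VeeamPSSnapin

$job = Get-VBRJob -Name "Backup Job 1"   # placeholder job name

# Move off the legacy 8 MB block size to the standard 'Local target'
# setting. Property and enum names are from memory - verify before use.
$options = $job.GetOptions()
$options.BackupStorageOptions.StgBlockSize = "KbBlockSize1024"
Set-VBRJobOptions -Job $job -Options $options

# The new block size only applies after an active full.
Start-VBRJob -Job $job -FullBackup
```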

BitLooker

  • For users upgrading from previous versions: by default, BitLooker will be enabled for newly created jobs after the upgrade, but it will not be automatically enabled on existing jobs, so that their current behaviour does not change. BitLooker can be enabled manually in the advanced job settings or with a PowerShell script (see the sketch below). Link to PowerShell Script
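The script behind that link is no longer to hand, but the commonly circulated version looks roughly like this: it flips the DirtyBlocksNullingEnabled flag (the property BitLooker sits behind, as far as I know) on every existing job. Treat the property name as an assumption and verify it on your own server.

```powershell
Add-PSSnapin VeeamPSSnapin

# Enable BitLooker (dirty block nulling) on all existing jobs.
# DirtyBlocksNullingEnabled is the property name as I know it - verify.
foreach ($job in Get-VBRJob) {
    $options = $job.GetOptions()
    $options.ViSourceOptions.DirtyBlocksNullingEnabled = $true
    $job.SetOptions($options)
}
```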

SQL Express Upgrade

  • Veeam Backup & Replication provides an option during installation to create a local SQL Express instance. This option is commonly chosen, so installations upgraded from older versions of Veeam may still be running on an outdated SQL Express instance, which can affect performance. It is advisable to upgrade the local SQL instance to the most recent supported version after first upgrading Veeam to the latest version and update; a quick way to check the current version is sketched below. All versions of Veeam Backup & Replication after 8.0.0.917 (Patch 1) support SQL 2014. https://www.veeam.com/kb2053
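To see what your Veeam database is currently running on, a quick query from a PowerShell prompt works. The instance name below is a placeholder (the installer's default has varied between versions, e.g. VEEAMSQL2008R2 or VEEAMSQL2012), so substitute whatever the Veeam installer created on your server.

```powershell
# Placeholder instance name - check what the Veeam installer created.
sqlcmd -S ".\VEEAMSQL2012" -E -Q "SELECT @@VERSION"
```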