Post-Migration to Veeam – Considerations for your Legacy Backup Solution

I’ve completed my fair share of Veeam deployments in environments where there is an existing backup solution.
The question that comes up most often is: what do I do with my legacy backup data?

Well, here are my thoughts around best practices for this situation.

Option 1: Use the legacy backup software to restore selected restore points to a staging area, then re-protect the restored VMs with Veeam. Once Veeam has re-protected the VMs, the legacy backup solution can be retired. VeeamZip is a great option here.


Pros:

  • Removal of the legacy backup solution


Cons:

  • Time-consuming if re-protecting a large amount of VM data
  • Very time-consuming if restoring from tape
  • Can be complicated when dealing with a large number of restore points and VMs
  • Requires a staging area to restore the VMs to

Thoughts: I see this option used when it's not possible for the legacy backup data to simply be left to expire, perhaps because the retention period is too long or because restores from legacy restore points are requested frequently. This method is not very common, as it requires a lot of time and resources to re-protect the restored data in Veeam.

Currently there is no migration tool to move legacy restore points into Veeam automagically, so the re-protection has to be done manually; a quick VeeamZip example is sketched below.
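To give an idea of what that manual re-protection can look like, here is a minimal VeeamZip sketch using the Veeam v9 PowerShell snap-in. The VM name and destination folder are placeholders, so treat this as a starting point rather than a finished script.

    # Load the Veeam snap-in on the backup server and connect locally.
    Add-PSSnapin VeeamPSSnapin
    Connect-VBRServer -Server "localhost"

    # Find the VM that was restored from the legacy backup into the staging area.
    $vm = Find-VBRViEntity -Name "RESTORED-VM01"

    # VeeamZip it to a local folder (a backup repository can also be used as the destination).
    Start-VBRZip -Entity $vm -Folder "D:\VeeamZip" -Compression 5

    Disconnect-VBRServer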

Option 2: Suspend the existing backup solution and retain the legacy backup data in case a restore is required from a point in time before Veeam was implemented. Any backup data that passes its retention period can be deleted to reclaim space.


Pros:

  • Less work and much easier


Cons:

  • The existing backup solution continues to consume resources: if it's installed on a physical server it takes up rack space, plus power and cooling while it remains on; if it's a virtual machine it consumes disk space on your production storage.

Thoughts: This is the most common option taken in my experience; any legacy licenses aren't renewed and the legacy backup data is left to expire.

Poor Performance & Power Management on VMware

Poor performance experienced by your VMs may be related to processor power management implemented either by ESXi/ESX or by the server hardware.

One real-world case I recently encountered with a customer involved VMware Horizon View and large delays experienced by their end users. Applications took an unusually long time to open and general performance was quite bad. This was quite apparent when comparing the same applications running on a thick client to running on a virtual desktop.

After running through the usual checks, consisting of the VMware Health Analyzer and checking for over-subscription and over-utilisation, there was no obvious culprit immediately apparent. What we did discover (which is detailed in ‘Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs’) involved changing a BIOS setting on all of the ESXi hosts: specifically, setting power management on the HP ESXi hosts to “Static High”, that is, no OS-controlled power management.

We are still working through the other recommendations provided in the VMware Health Analyzer report and have already made some configuration changes, but nothing has resulted in a noticeable improvement with the exception of the power management setting. The customer has reported that this particular change provided the most significant performance improvement of anything previously attempted (hardware or software).
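If you want to see where your hosts currently stand, a quick PowerCLI check can help. This is a hedged sketch rather than part of the original engagement: the vCenter and host names are placeholders, and note that the actual fix here was the “Static High” setting in the HP server BIOS, which has to be changed out-of-band (RBSU/iLO) rather than from vSphere.

    # Connect to vCenter (PowerCLI must be installed; you will be prompted for credentials).
    Connect-VIServer -Server "vcenter.lab.local"

    # Report the current ESXi-side power policy for every host.
    Get-VMHost |
        Select-Object Name,
            @{N = 'PowerPolicy'; E = { $_.ExtensionData.Config.PowerSystemInfo.CurrentPolicy.ShortName }}

    # Optionally set a host's ESXi-side policy to High Performance.
    # Check $powerSystem.Capability.AvailablePolicy first to confirm key 1 maps to High Performance on your hosts.
    $powerSystem = Get-View (Get-VMHost "esxi01.lab.local").ExtensionData.ConfigManager.PowerSystem
    $powerSystem.ConfigurePowerPolicy(1)

    Disconnect-VIServer -Confirm:$false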


Progress Controller: [VCSA ERROR] – Progress callback error

Deploying vCenter Server Appliance 6.0, I ran into this beauty:

‘Progress Controller: [VCSA ERROR] – Progress callback error’

Turns out the vCenter Server Appliance installer will fail if more than one DNS server is provided. Fantastic…

To work around it, provide only one DNS server IP during the installation wizard. Once the VCSA is installed and running, you can then add the secondary DNS IP.

Veeam Backup & Replication 9.0 Update 2 Released

Just a quick post to detail the new Veeam B&R v9 Update 2 that was released on the 5th of August.

As Gostev pointed out in his weekly digest,

“The main theme for this update is support for new platform support (for example CISCO HX, EMC Unity, vCloud Director 8.10), first wave of scalability enhancements (we’ve decided to backport a few isolated optimizations from 9.5) as well as bug fixes to address common support issues. ”

A couple of important notes to point out:

  • It is recommended to reboot the Veeam server and, once the reboot is done, stop all Veeam jobs and services before applying the update.
  • After installing the update, during the first start of the Veeam Backup Service, required modifications will be made to the configuration database automatically to optimize its performance. These modifications may take up to 10 minutes to complete. Please do not reboot the Veeam server, or attempt to stop the service during this operation.
  • Please confirm the version you are running prior to installing this update.

Release Notes

Please confirm the version you are running prior to installing this update; you can check this under Help | About in the Veeam Backup & Replication console. If you are using a partner preview build, you must upgrade to the GA build first by installing the Day 0 Update (KB2084).

After upgrading, the new build number will be reflected under Help | About.

Prior to installing this update, please reboot the Veeam server to clear any locks on the Veeam services and, once the reboot is done, stop all Veeam jobs and services before applying the update.
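For convenience, stopping the jobs and services can be scripted. A minimal sketch, assuming the Veeam v9 PowerShell snap-in is installed on the backup server and the services follow the standard "Veeam*" naming:

    # Stop any running Veeam jobs before applying the update.
    Add-PSSnapin VeeamPSSnapin
    Connect-VBRServer -Server "localhost"
    Get-VBRJob | Where-Object { $_.GetLastState() -eq "Working" } | Stop-VBRJob
    Disconnect-VBRServer

    # Then stop all Veeam services.
    Get-Service -Name "Veeam*" | Where-Object { $_.Status -eq "Running" } | Stop-Service -Force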

After installing the update, during the first start of the Veeam Backup Service, required modifications will be made to the configuration database automatically to optimize its performance. These modifications may take up to 10 minutes to complete. Please do not reboot the Veeam server, or attempt to stop the service during this operation. If there is concern regarding the time that the Veeam Backup Service takes to start after upgrade, please contact Veeam Customer Support.

Once Veeam Backup Service starts, please open the console and allow Veeam Backup & Replication to update its remote components.

As a result of on-going R&D effort and in response to customer feedback, Update 2 includes over 300 enhancements and bug fixes, the most significant of which are listed below:

New platform support

  • Cisco HyperFlex HX-Series support for Direct NFS backup mode.
  • EMC Unity support for Backup from Storage Snapshots and Veeam Explorer for Storage Snapshots functionality.
  • EMC Data Domain DD OS 5.7 support for DD Boost integration.
  • ExaGrid: the minimum supported ExaGrid firmware version has been updated.
  • NetApp Data ONTAP 8.3.2 support.
  • VMware vCloud Director 8.10 support.
  • VMware VSAN 6.2 support.


Engine

  • Backported a number of isolated Enterprise Scalability enhancements from the 9.5 code branch to improve transaction log backup, tape backup and user interface performance.
  • Updated OpenSSH client to version 7.2 to enable out of the box support for modern Linux distributions.
  • Improved iSCSI target performance (iSCSI target is used to mount backup remotely in certain file-level and item-level recovery scenarios).
  • iSCSI mount operations are now retried automatically to work around occasional “The device is not ready” errors which happen when the mount operation takes too long. By default, the mount is retried 6 times every 10 seconds. To change the number of retries, create the IscsiMountFsCheckRetriesCount (DWORD) registry value under the HKLM\SOFTWARE\Veeam\Veeam Backup and Replication key on the backup server (a quick example is shown below).
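For reference, creating that registry value with PowerShell might look like the following; the key path and value name are taken from the note above, while the retry count of 10 is just an example.

    # Raise the iSCSI mount retry count from the default of 6 to 10.
    $key = "HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication"
    New-ItemProperty -Path $key -Name "IscsiMountFsCheckRetriesCount" -PropertyType DWord -Value 10 -Force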

Backup Copy

  • To reduce backup server load, Backup Copy jobs targeting shared folder or deduplicating appliance backed repositories with the gateway server setting set to Automatic selection will now start data moves on the mount server associated with the backup repository (as opposed to the backup server). In cases when the mount server is unavailable, the data mover will be started on the backup server as before.
  • Backup Copy performance should now be more consistent due to preserving the backup file cache when the job switches to idle mode.
  • Minor reliability improvements in GFS full backup creation algorithm.

Microsoft Hyper-V

  • Find-VBRHvEntity cmdlet performance has been improved significantly when used against a Hyper-V cluster.
  • Backup infrastructure resource scheduler should schedule Guest Interaction Proxy resource dramatically faster in large infrastructures (for example, 5 seconds instead of 15 minutes).

Microsoft SQL Server

  • Improved performance and reduced resource consumption of Microsoft SQL Server transaction log backups.

Microsoft Exchange

  • Added ability to force CAS server for Veeam Explorer for Microsoft Exchange (instead of automatically detecting one) via DefaultCASServer (DWORD) registry value under HKLM\SOFTWARE\Veeam\Veeam Backup and Replication key on the backup server.
  • Added ability to change the order of Exchange autodiscovery policies for Veeam Explorer for Exchange (support only setting).


Oracle

  • Added ability to restore Oracle databases while preserving certain parameters which are critical to RMAN, in scenarios such as a database name change, via a new parameter in the PFileParameters.xml file of Veeam Explorer for Oracle.

Cloud Connect Replication

  • Support for Planned Failover functionality with cloud replicas. You can now perform planned failover to achieve zero data loss, for example when a natural disaster can be predicted in advance.

Veeam Cloud & Service Provider Partners

  • Update 2 introduces important changes and fixes around rental licensing, including pilot functionality for usage reporting directly from the user interface. For additional information, as well as the list of other service-provider-specific enhancements and bug fixes included in this update, please refer to the issue tracking topic in the private VCSP forum. If you are a VCSP but don’t have access, please apply to the Cloud & Service Providers group using the Veeam forum’s User Control Panel.

Veeam Agent for Linux Beta

Do you have any Linux machines that are not virtualised, or that are virtualised but running on hypervisors Veeam cannot reach, such as in public cloud environments?

Protecting these workloads can be difficult and cumbersome to manage; often you would need to rely on the public cloud provider’s backup solution (at additional cost, of course) or use a traditional backup solution in lieu of Veeam. Well, that is all about to change with the new Veeam Agent for Linux, which is now available in beta.

The Veeam Agent for Linux allows us to back up Linux workloads even when they are physical servers or running in a public cloud, where traditionally the hypervisor is hidden away from Veeam.

What’s great about this product is that it’s completely free! This will enable users to finally get rid of those last few traditional backup licenses used to protect physical Linux workloads. So not only is the Veeam Agent for Linux free but it can also save you money!

A couple of notes:

  • It is distributed as RPM and DEB packages.
  • It supports any Linux kernel from version 2.6.32 and above as long as you use the default kernel of your distribution, which means even old installations can be protected.
  • Both 32-bit and 64-bit kernels are supported.
  • Integrates with Veeam Backup & Replication to use existing backup repositories as target locations.
  • Performs image-based backups from inside the Linux guest, at both the file level and the volume level.

If you want to learn more, test it and contribute to improving it before version 1.0, sign up for the Public Beta here.

Synology NAS as a Linux Repository?

So it turns out that it’s possible to use a Synology NAS as a “native Linux repository” within Veeam. Usually, the Synology NAS would just be configured as a CIFS (SMB) target or, better yet, presented via iSCSI and attached to a managed server.

Now, this is good to know because CIFS repositories have no data mover agent installed on the storage itself. Because the NAS’s CPU and RAM cannot be properly used for on-storage operations (reverse incrementals, transforms and merges), CIFS is generally regarded as the slowest of the three options. By adding the NAS as a Linux backup repository instead, the target Data Mover Service runs on the Linux server itself, which should improve performance versus a CIFS backup repository.

Now, I wouldn’t recommend running this just yet in a production environment; I’ve read that users have reported success only for the next DSM update to break the repository.

If you are keen to try it out, there are a couple of requirements:
1. The Synology NAS must have Perl installed
2. The root password should not contain certain symbols (such as a space)
3. The Synology needs to be x86/Intel; if it’s ARM then you are out of luck…
4. It should have at least 2 GB of RAM; 4 GB or more is recommended

For step-by-step instructions, check out Jim Millar’s blog post here.

You should see better speeds with the Synology NAS configured as a Linux repository compared to CIFS, as backup data transferred over the network will now be compressed and repository maintenance tasks are performed inside the Synology NAS instead of the data being transported back to a managed server. A scripted example of registering the repository is sketched below.
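If you’d rather script the setup once the NAS meets the requirements above, a rough sketch with the Veeam v9 PowerShell snap-in could look like this. The host name, credentials and folder path are placeholders, SSH must be enabled on the NAS, and I’d still do the first one through the console.

    Add-PSSnapin VeeamPSSnapin
    Connect-VBRServer -Server "localhost"

    # Register the Synology NAS as a managed Linux server.
    Add-VBRLinux -Name "synology.lab.local" -User "root" -Password "P@ssw0rd" `
        -Description "Synology NAS as Linux repository"

    # Create a Linux backup repository on one of the NAS volumes.
    $nas = Get-VBRServer -Name "synology.lab.local"
    Add-VBRBackupRepository -Name "Synology Linux Repo" -Server $nas -Folder "/volume1/veeam" -Type LinuxLocal

    Disconnect-VBRServer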

Failed to create database VeeamBackup. Create failed for database VeeamBackup.

I recently encountered this issue after attempting to install Veeam B&R v9 on a server that, at one stage in its life, had an instance of Veeam B&R installed and then uninstalled.

I cannot comment on how it was uninstalled, but my very first installation of Veeam B&R went smoothly; I utilised the existing Veeam DB and ran up a couple of backup jobs. Unfortunately, after a couple of days several issues started to appear. Essentially the existing DB was no good and I needed to wipe the slate clean.

Unfortunately, several unsuccessful installations later, Veeam was now having a problem when creating the database during installation.

The exact error was ‘Failed to create database VeeamBackup.Create failed for database VeeamBackup’

What finally fixed the problem was to uninstall all Veeam B&R and SQL components and then manually delete the left-over Microsoft SQL folder in Program Files.
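For anyone hitting the same thing, that clean-up step can be done from PowerShell. This is a hedged sketch only: the folder paths are the common defaults and may differ on your server, and you should only remove them if nothing else on the box uses that SQL instance.

    # After uninstalling the Veeam B&R and SQL components, remove the left-over SQL Server folders.
    $leftovers = @(
        "C:\Program Files\Microsoft SQL Server",
        "C:\Program Files (x86)\Microsoft SQL Server"
    )
    foreach ($path in $leftovers) {
        if (Test-Path $path) {
            # Review the output with -WhatIf first, then drop it once you are sure nothing else needs the folder.
            Remove-Item -Path $path -Recurse -Force -WhatIf
        }
    }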


Backing Up VMware AppStacks/App Volumes

Recently I was working on a project that involved VMware App Volumes, otherwise known as AppStacks. The basic premise behind AppStacks is that they facilitate application delivery to virtual desktops. This particular project involved over 100 applications being delivered to nearly 500 desktops across two datacenters.

Now, AppStacks are just virtual machine disk files (.VMDKs) that contain one or more shareable applications. The applications in question could be relatively simple, like Adobe Reader, or more complex applications with several dependencies and licensing requirements. These VMDKs are attached to virtual machines to enable the application to run on many VMs at once; in essence, AppStacks are shareable, one-to-many, read-only volumes. Using AppStacks enables quick, secure and inexpensive management of thousands of applications and virtual machines as if they were one.

The applications themselves become portable objects that can be moved across data centers or the cloud and then shared with thousands of virtual machines quickly and easily.

Now, one of the current challenges around AppStacks is how you protect them. The VMDK files themselves cannot easily be backed up, as Veeam only backs up a virtual machine and any VMDK files attached to that virtual machine. AppStacks are not virtual machines; they are purely VMDK files that are temporarily attached to many virtual machines. The VMDK files are attached to any number of VMs at any given time and only remain attached as long as the user is logged into the machine. To complicate it further, users will usually have a different mix of VMDKs attached based on their needs and role within the business.

So how can I protect my AppStack volumes? Well, if your storage vendor can back up your VMDK files at the datastore level, you can try that, but not all storage vendors allow that kind of backup.

Alternatively, the App Volumes Manager does allow the creation of ‘Storage Groups’ to replicate App Volumes between multiple datastores. The issue remains that this type of replication is not a good backup; in the event of a storage failure, or if someone deletes an AppStack, the AppStack volume will be removed from both datastores. The replication interval is apparently one hour by default.

Another option is to utilise the VMware fling that was recently made available by Chris Halstead, Dale Carter & Stephane Asselin. The fling connects to both the App Volumes Manager and vCenter using API calls to create a backup virtual machine, and the underlying VMDK files of selected AppStacks and Writable Volumes are attached to that backup virtual machine. Once the VMDKs are attached to this ‘backup virtual machine’, Veeam can then back up the VMDK files.

Unfortunately, this method requires manually assigning each AppStack to a backup VM; any time someone creates a new AppStack you will need to open up the fling and attach it to a backup VM for it to be protected. We were after a more automated solution and, due to the very nature of flings, we were hesitant to use this in a production environment. Our solution was to copy the AppStacks using SCP: since AppStacks are read-only, corruption should not be an issue, so backing them up should be as simple as copying them to another location.

Veeam did have a standalone product called FastSCP, but this has since been integrated into the free edition of Veeam Backup. By utilising ‘File Copy’ jobs within Veeam Backup we can copy these AppStack VMDKs using SCP. A ‘File Copy’ job can easily be configured with a schedule and provides the bonus of traffic compression and empty block removal to reduce copy times and improve performance. Veeam also generates a one-time username and password for each file transfer session; for example, if you have to copy 3 VMDK files, the program will change credentials 3 times.

The last challenge we encountered was how to perform SCP operations from a vSAN datastore. In the end, we used the ‘Storage Group’ capability of the App Volumes Manager to replicate AppStacks to another datastore that wasn’t vSAN, and then Veeam performed a ‘File Copy’ which copied the VMDKs to a safe location protected by an existing backup solution. Since our only datastore was a vSAN datastore, we configured an ‘OpenFiler’ VM to provide an iSCSI LUN to stand up another datastore. Technically it’s a nested datastore, but we only care about enabling access to the AppStack VMDKs so we can copy them.

Do you know of a better way to protect AppStack volumes? Let me know in the comments section.

Disclaimer: App Volumes v2.1 was the version used; v3.0 has since been released and may solve some of these problems.

Remove from Configuration

One minor change to v9 Backup & Replication console I wanted to mention is the rewording of the previous ‘Remove from backups’ operation which is now called ‘Remove from configuration’.


The old ‘Remove from backups’


The new ‘Remove from configuration’

This should help eliminate some of the confusion users might experience when needing to just remove records about backups or replicas from the Veeam Backup & Replication console and configuration database. The backup files themselves (VBK, VIB, VRB, VBM) still remain in the backup repository.

You could reimport these files at a later date and perform restore operations. Replicated VMs also remain on target hosts. If necessary, you can start them manually after the Remove from configuration operation is performed.
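As a rough sketch of that reimport with the Veeam v9 PowerShell snap-in (the server name and .vbm path are placeholders):

    Add-PSSnapin VeeamPSSnapin
    Connect-VBRServer -Server "localhost"

    # The managed server that can see the backup files (here, the backup server itself).
    $repoServer = Get-VBRServer -Name "veeam01.lab.local"

    # Point the import at the backup metadata (.vbm) file in the repository folder.
    Import-VBRBackup -Server $repoServer -FileName "E:\Backups\LegacyJob\LegacyJob.vbm"

    Disconnect-VBRServer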

Mind the following:

  • [For VM backups] When you remove an encrypted backup from the configuration, Veeam Backup & Replication removes the encryption keys from the configuration database, too. If you import such a backup on the same backup server or another backup server, you will have to specify the password or unlock the backup with Veeam Backup Enterprise Manager. For more information, see Importing Encrypted Backups.
  • [For VM replicas] The Remove from configuration operation can be performed only for VM replicas in the Ready state. If the VM replica is in the Failover or Failback state, this option will be disabled.


Source backup file has different block size

After changing the compression and deduplication settings for an existing backup job to be more aggressive (which I detailed in my previous post), I ran into the following problem: ‘Source backup file has different block size’.

Backup Copy Error

By changing the deduplication setting I altered the block size of the very next active full backup file that was created. This appears to have upset my backup copy job, which used the backup job as its source. Selecting an ‘Active Full’ on the failing backup copy job resolved the problem.