I recently encountered this issue after attempting to install Veeam B&R v9 on a server that, at one stage in its life, had an instance of Veeam B&R installed and then uninstalled.
I cannot comment on how it was uninstalled, but my very first installation of Veeam B&R went smoothly: I utilised the existing Veeam DB and ran up a couple of backup jobs. Unfortunately, after a couple of days, several issues started to appear. Essentially, the existing DB was no good and I needed to wipe the slate clean.
Unfortunately, several unsuccessful installations later, Veeam began failing to create the database during installation.
The exact error was ‘Failed to create database VeeamBackup.Create failed for database VeeamBackup’
What finally fixed the problem was to uninstall all Veeam B&R and SQL components, then manually delete the left-over Microsoft SQL folder in Program Files.
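In case it helps anyone in a similar state, the cleanup boils down to something like the following sketch. The SQL instance name and the folder path are assumptions, not taken from my server — verify both on your own system before dropping or deleting anything:

```shell
:: Sketch only -- instance name and path below are assumptions; check yours first.

:: 1. If the SQL instance is still installed, drop the stale VeeamBackup database
sqlcmd -S .\VEEAMSQL2012 -Q "DROP DATABASE VeeamBackup"

:: 2. After uninstalling all Veeam B&R and SQL components, remove the
::    left-over SQL folder the uninstaller leaves behind
rd /s /q "C:\Program Files\Microsoft SQL Server"
```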
Have you ever tried to perform an Instant VM Recovery and the VM just refused to publish, yet smaller VMs publish successfully?
The problem may be that the Instant VM Recovery lease is timing out, which can be caused by a number of factors:
- VM with large disks attached
- Large backup chain
- Slow storage
- Deduplication on the storage
To increase the lease timeout, you need to create a registry entry under ‘HKLM\SOFTWARE\Veeam\Veeam Backup and Replication’:
IrMountLeaseTimeOut (REG_DWORD) — the default value is 30 minutes.
For the change to take effect, you must restart the Veeam Backup & Replication service.
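For reference, the steps above can be sketched from an elevated command prompt. The 120-minute value and the service name are assumptions for illustration, not values from the post — adjust to suit your environment:

```shell
:: Sketch: create the lease timeout value (120 minutes is an arbitrary example)
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v IrMountLeaseTimeOut /t REG_DWORD /d 120 /f

:: Restart the backup service so the new lease timeout takes effect
:: (service name is an assumption -- confirm it in services.msc)
net stop "Veeam Backup Service" && net start "Veeam Backup Service"
```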
So it turns out there is a free tool that allows you to expose physically connected tape devices via the iSCSI protocol. This could be really good news for those forced to use a physical Veeam server to attach to a tape library.
The vSphere 6.0 vCenter Server Appliance now has the same scalability numbers as the Windows vCenter Server. There really aren’t many reasons left why an organisation would not want the appliance over the Windows vCenter. vCenter Update Manager requiring an additional Windows server, perhaps?
I was recently working on a VMware SRM solution utilising an IBM Storwize V3700 SAN at each site with remote mirroring. I had configured a Global Mirror with Change Volumes relationship over IP, which worked beautifully while both the source and target SANs were in the same subnet and building. Once the target SAN was moved out to the DR site, the IP SAN traffic went through the gateway across the WAN. The performance of the IP replication was pretty average, to say the least: out of a 100Mbps link, I could only achieve 1MBps, even though a Windows file copy would easily saturate the link at a consistent 10MBps.
We ended up testing the link for packet loss, and while it did show some loss, I had assumed that if a Windows file copy could achieve 10MBps, then the IBM Storwize SAN should be able to achieve similar results.
Well, it turns out that’s incorrect, as per the below snippet from the IBM System Storage SAN Volume Controller and Storwize V7000 Replication Family Services Redbook:
“Packet loss results in severe performance degradation well out of proportion to the number of packets actually lost. A link that is considered “high quality” for most TCP/IP applications might be completely unsuitable for the remote mirror.”
Now, this helped explain why Veeam could perform a Backup Copy to the DR site at a consistently fast 10MBps without any problems, yet the remote mirror performed so badly.
After resolving the packet loss problem, which turned out to be dodgy SFP+ fibre adapters and a couple of cables, the remote mirror performance jumped straight to 10MBps and has stayed there consistently ever since.
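The Redbook’s “out of proportion” point lines up with the well-known Mathis approximation for a single TCP stream: throughput ≈ (MSS / (RTT × √p)) × √(3/2), where p is the packet loss probability. A quick back-of-the-envelope sketch — the 20ms RTT and loss figures below are illustrative assumptions, not measurements from this link:

```shell
# Mathis approximation: single-stream TCP throughput ceiling in bits/sec.
# Arguments: MSS in bytes, RTT in seconds, p = packet loss probability.
# The RTT and loss values used below are illustrative assumptions only.
tcp_cap() {
  awk -v mss="$1" -v rtt="$2" -v p="$3" \
    'BEGIN { printf "%.0f\n", (mss * 8 * sqrt(1.5)) / (rtt * sqrt(p)) }'
}

tcp_cap 1460 0.020 0.0001   # 0.01% loss: roughly 72 Mbps -- barely noticeable
tcp_cap 1460 0.020 0.01     # 1% loss: roughly 7 Mbps -- the link is crippled
```

Going from 0.01% to 1% loss — a hundred-fold increase — only cuts the ceiling by a factor of ten (it scales with √p), but either way a “mostly fine” link can cap a single stream far below the nominal 100Mbps, and replication protocols with strict latency expectations can fare even worse.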