When using Veeam Backup for Azure, there are two recommended options for creating multiple copies of backup data. The first is to run a backup copy job from an external repository. The second is to rely on the native storage redundancy provided by Azure, such as geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS). This article explores the second option, leveraging RA-GRS to achieve backup redundancy across two Azure regions.
As a quick reminder, with GRS and RA-GRS the data is stored in two separate locations: a primary and a secondary. The secondary is in a different geographic region from the primary, ensuring that data remains safe even in the event of a regional outage or natural disaster.
For the purpose of testing, an RA-GRS storage account was provisioned in Australia East, for which Azure automatically set the secondary region to Australia Southeast. This paired secondary region is determined by the primary region and can’t be changed. Access to the storage account is via public endpoints.
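As an illustration, a similar RA-GRS account can be provisioned with the Azure CLI. This is only a sketch: the resource group and storage account names below are placeholders, not the ones used in the test.

```shell
# Create a resource group in the primary region (names are placeholders)
az group create --name rg-veeam-test --location australiaeast

# Provision a storage account with read-access geo-redundant storage (RA-GRS)
az storage account create \
  --name veeamragrsdemo \
  --resource-group rg-veeam-test \
  --location australiaeast \
  --sku Standard_RAGRS \
  --kind StorageV2

# Confirm the paired secondary region that Azure selected automatically
az storage account show \
  --name veeamragrsdemo \
  --resource-group rg-veeam-test \
  --query "{primary:primaryLocation, secondary:secondaryLocation}"
```

With an Australia East primary, the `secondaryLocation` returned should be Australia Southeast, matching the pairing described above.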
I wanted to share this here as a handy reminder of how network mapping and re-IP work.
For Microsoft Windows VMs, Veeam Backup & Replication also automates the reconfiguration of VM IP addresses. If the IP addressing scheme in the production site differs from the DR site’s scheme, re-IP rules can be created for the replication job. When a failover occurs, Veeam Backup & Replication checks whether any of the specified re-IP rules apply to the replica. If a rule applies, Veeam Backup & Replication mounts the replica’s VM disks to the backup server and changes its IP address configuration via the Microsoft Windows registry, all in less than a second. If the failover is undone or if you fail back to the original location, the replica’s IP address is changed back to its pre-failover state.
The replication process of a VM typically uses the same network configuration as the original VM, but if the DR site has a different network setup, a network mapping table can be created for the replication job. This table maps the source networks to the target networks. During each replication job run, Veeam Backup & Replication checks the original VM’s network configuration against the mapping table. If the original VM network matches a source network in the table, Veeam Backup & Replication updates the replica configuration file to replace the source network with the target one. This ensures that the VM replica always has the correct network settings required by the DR site. In the event of a failover to the VM replica, it will be connected to the appropriate network.
You may see “tag is unavailable” errors when attempting to protect a VM using vSphere tags, or a “QueryAllCategoryInfos failed. There was no endpoint listening at https://vcenteraddress/lookupservice/sdk that could accept the message” error when editing the backup job.
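As a quick first check, you can probe the lookup service endpoint directly from the backup server; `vcenteraddress` below stands in for your vCenter hostname, as in the error message itself.

```shell
# Probe the vCenter lookup service endpoint. A connection refused or
# timeout here lines up with the "no endpoint listening" error, while
# an HTTP response suggests the endpoint itself is reachable.
curl -vk https://vcenteraddress/lookupservice/sdk
```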
I was recently working with a customer who was troubleshooting the error ‘Failed to load content of the folder’ whenever attempting a file-level restore within Veeam Backup & Replication.
After investigating, we found the time configured on the Windows OS running the backup repository role did not match the backup server; it was out by five minutes. After setting the correct time, the file-level restore performed as expected.
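To compare and correct the clock on the repository server, assuming it runs the standard Windows Time service, something like the following can be run from an elevated command prompt (the backup server hostname below is a placeholder):

```shell
:: Check the current time source and sync status on the repository server
w32tm /query /status

:: Measure the clock offset against the backup server (hostname is a placeholder)
w32tm /stripchart /computer:backupserver01 /samples:3 /dataonly

:: Force a resync against the configured time source
w32tm /resync
```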
I’ve been troubleshooting a CDP issue in my home lab so I wanted to reinstall the I/O filter drivers. Attempting to remove the filter drivers using the VBR console was failing to place the ESXi host into maintenance mode since the VBR VM was running on that host and I had nowhere to vMotion it to (I used nested virtualisation with a single physical ESXi server).
I also wasn’t having much luck uninstalling the Veeam CDP VAIO filter driver manually as per the instructions in https://www.veeam.com/kb4151, but fortunately there is another method to remove them.
1. Place the ESXi host into maintenance mode.
2. SSH to the ESXi host.
3. Verify the ‘veecdp’ filter driver exists with the command: esxcli software vib list
4. Remove the VAIO filter driver with the command: esxcli software vib remove -n veecdp
5. Take the ESXi host out of maintenance mode.
6. In the VBR console, run ‘Uninstall I/O filter’.
7. The task will fail to uninstall the drivers as they no longer exist, but the console will now show that it’s possible to ‘Install I/O filter’ again.
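From the SSH session, steps 1–5 above look roughly like this; the vib name `veecdp` is the one referenced in Veeam KB4151.

```shell
# 1. Place the host into maintenance mode
esxcli system maintenanceMode set --enable true

# 2-3. Confirm the Veeam CDP filter driver is installed
esxcli software vib list | grep -i veecdp

# 4. Remove the VAIO filter driver
esxcli software vib remove -n veecdp

# 5. Take the host out of maintenance mode
esxcli system maintenanceMode set --enable false
```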
I’ve recently had a couple of issues when adding a standalone VBR (Veeam Backup & Replication) server to VDRO (Veeam Disaster Recovery Orchestrator). This is a quick write-up to cover the basic troubleshooting steps performed and how the problems were resolved.
The error thrown in VDRO is as follows: “Failed to connect to the server. Specified user is invalid or does not have enough permissions on the server.”
Tags are a great way to manage and organise resources across a vSphere environment; with tags, we can sort and group VMs together based on any criteria we wish. Veeam can even leverage these “groups of VMs” in many different ways.
For example, Veeam Disaster Recovery Orchestrator leverages tags for orchestration plans and to make sure VMs are restored to the right place with tag-based recovery locations.
Veeam ONE Business View can automatically create and assign tags to VMs based on any desired criteria with its categorisation engine; this is on top of Veeam ONE’s capability to run reports based on tags. I’ve previously written about Veeam ONE Business View, which can be found here.
Veeam Backup & Replication (VBR) can also utilise tags for any job. By simply adding tags as the source object of the job, VBR will protect any VMs carrying that tag; this can be especially time-saving if VMs are frequently being provisioned. Another benefit of tags is that, unlike folders and resource pools, VMs can be assigned multiple tags.
Let’s take a look at how we could group a selection of 8 VMs into various different backup jobs using tags.
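As a sketch of how such groupings could be created outside the vSphere Client, the open-source govc CLI can create tag categories, tags, and attachments. The category, tag, and VM inventory paths below are made up for illustration.

```shell
# Create a tag category for backup tiers (single tag per VM in this category)
govc tags.category.create -d "Backup tiers" -m=false BackupPolicy

# Create tags representing different backup jobs
govc tags.create -c BackupPolicy Tier1-Daily
govc tags.create -c BackupPolicy Tier2-Weekly

# Attach tags to VMs; a VBR job using the tag as its source object
# will then pick up these VMs automatically
govc tags.attach Tier1-Daily /DC/vm/app-vm-01
govc tags.attach Tier2-Weekly /DC/vm/dev-vm-05
```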
I was recently involved with a Veeam deployment that was running into problems during testing: its only performance tier had run out of space. Though this wasn’t unexpected, as the provisioned disk was undersized and just temporary until testing finished, it was preventing new backups from completing successfully.
The full performance tier belonged to a Scale-Out Backup Repository that was also configured with a capacity tier (copy + move mode) backed by an immutable AWS S3 bucket. It’s worth mentioning that the backup files in the capacity tier were still within their immutability retention period.
According to the user guide: “If you use the scale-out backup repository, keep in mind that the Delete from disk operation will remove the backups not only from the performance tier but also from the capacity and archive tier. If you want to remove backups from the performance tier only, you should move those backups to the capacity tier instead. For details, see Moving to Capacity Tier.”
Attempting to perform a “Delete from disk” operation was failing with the error “Error: Unable to delete backup in the Capacity Tier because it is immutable”.
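The immutability blocking the delete can also be confirmed from the S3 side, for example with the AWS CLI; the bucket name and object key below are placeholders, not real Veeam object paths.

```shell
# Show the bucket-level Object Lock configuration and any default retention
aws s3api get-object-lock-configuration --bucket veeam-capacity-tier-bucket

# Check the retention mode and retain-until date on a specific backup object
aws s3api get-object-retention \
  --bucket veeam-capacity-tier-bucket \
  --key Veeam/Archive/example-backup.vbk
```

While an object's retain-until date is in the future, S3 itself refuses the delete, which is what surfaces as the “it is immutable” error in VBR.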
Ransomware isn’t sexy, but it’s certainly an important topic in today’s IT security landscape. With its unprecedented growth and relentless evolution, organisations need to constantly keep one step ahead of bad actors eager to make a buck by exfiltrating and encrypting your important data.
Ransomware attacks mainly occur because IT is a complex and ever-changing environment. Organisations are busy modernising their applications from monolithic designs to new, highly distributed container-based services. The remote workforce is more connected than ever, with the ability to access data hosted across multiple cloud providers, connecting from virtually any device at any hour of the day. IT departments are being asked to do more with less each and every year, while ransomware attacks are on the rise and becoming more costly than ever before.
Beyond encrypting data, some ransomware takes it a step further, threatening to leak stolen data unless a ransom is paid; this theft is otherwise known as data exfiltration. Unfortunately, stopping ransomware prior to an attack is difficult and, at best, inconsistent. No single product or service has all the solutions to the challenges raised by these attacks; instead, it’s recommended to take a multi-layered approach. Apply best practices, keep systems up to date, enforce good data hygiene, configure event logging, and identify anomalies (indicators of compromise) to give yourself the best chance of discovering an attack as early as possible.
VeeamON 2022, the best data protection conference of the year, is about to kick off in Las Vegas and virtually. This will be the 6th time Veeam has run the event, with attendees able to join both virtually and in person.
VeeamON provides glimpses into product roadmaps, with hours of technical content ranging from demos to deep dives of virtually every Veeam product and feature. Hot topics such as v12, Ransomware, Kubernetes, cloud-native backups, Salesforce and Microsoft 365 will be covered by Veeam experts including fellow Veeam Vanguards.
For those who can’t make it in person, virtual attendance is free, running from May 16–19 with AMER-, EMEA- and APJ-specific sessions to ensure easy access for those joining.
With 2 days of awesome content, you might not make it to every session on your agenda. Fortunately, Veeam makes it easy to replay recorded sessions from the VeeamON website after the conference has finished.