Determining immutability periods when working with Grandfather-Father-Son (GFS) backups can be a bit tricky, considering GFS immutability periods can be determined by either:
a) the GFS retention period, or
b) the backup repository's immutability period.
Fortunately, Fabian (@Mildur) from the Veeam R&D forums shared valuable insights that simplify this process. The full discussion can be found here.
The key takeaways from the forum discussion are as follows:
Standalone Repositories: In the case of standalone repositories, data remains immutable throughout the GFS retention period. This means the backup data is secure and unchangeable throughout the entire GFS retention timeline.
Performance Tier without Capacity Tier: When using the Performance Tier without the Capacity Tier, data immutability holds for the complete GFS retention period.
Performance Tier with Move Policy Disabled: Similar to the previous scenario, if the ‘Move Policy’ is disabled within the capacity tier, the data will be immutable for the entire GFS retention period.
Performance Tier with Move Policy Enabled: When the Move Policy is enabled within the Capacity Tier, unlike the previous example, immutability is applied as per the repository’s immutable retention period.
On Capacity Tier: For backup data stored on capacity tier, the immutability aligns with the repository’s settings.
On Archive Tier: Within the Archive Tier, data remains immutable for the entire GFS retention period.
An essential note from the forum post highlights that if the GFS retention period is shorter than the repository immutability period, the repository immutability period becomes the minimum for all backup files. In other words, whichever of the two periods is longer will be the immutability period.
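That "whichever is longer" rule can be expressed in a few lines. This is a minimal sketch, not Veeam code; the function name and parameters are illustrative only:

```python
from datetime import timedelta

def effective_immutability(gfs_retention: timedelta,
                           repo_immutability: timedelta) -> timedelta:
    """The effective immutability period is the longer of the GFS
    retention period and the repository immutability period."""
    return max(gfs_retention, repo_immutability)

# A 30-day repository immutability period outlasts a 14-day GFS point:
print(effective_immutability(timedelta(days=14), timedelta(days=30)).days)  # 30
# A yearly GFS point outlasts the 30-day repository setting:
print(effective_immutability(timedelta(days=365), timedelta(days=30)).days)  # 365
```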
To simplify this further, check out this fabulous table created by a fellow Veeam colleague, John Suh.
The business categorisation feature available in Veeam ONE offers a solution for managing and organising virtual machines (VMs) within virtualised infrastructures, such as stretched cluster environments. By leveraging the underlying ESXi host information, this feature allows IT administrators to create a seamless and dynamic approach to VM management, ensuring optimal backup and recovery strategies.
In a stretched cluster scenario, where VMs can move between different data centres for improved availability and disaster recovery capabilities, the ability to group VMs based on their current location can be quite important, for example when transient VMs need to be protected to a local backup repository.
Leveraging Veeam ONE’s business categorisation means we can tag VMs according to the specific ESXi host they are running on. Tags can then be configured as the source for a Veeam backup job.
Step 1. Create the category.
Step 2. Pick ‘Grouping expression’ as the Categorisation method.
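The underlying idea of a grouping expression, tagging each VM with the name of the ESXi host it currently runs on, can be sketched as follows. The inventory data and function below are hypothetical, purely to illustrate the categorisation logic:

```python
# Hypothetical inventory: VM name -> ESXi host it is currently running on
vm_host = {
    "app01": "esxi-dc1-01",
    "db01": "esxi-dc2-01",
    "web01": "esxi-dc1-01",
}

def categorise_by_host(inventory: dict) -> dict:
    """Group VMs by their current host, mirroring a grouping expression
    that assigns each VM a tag named after its host."""
    groups: dict = {}
    for vm, host in inventory.items():
        groups.setdefault(host, []).append(vm)
    return groups

print(categorise_by_host(vm_host))
# {'esxi-dc1-01': ['app01', 'web01'], 'esxi-dc2-01': ['db01']}
```

As VMs vMotion between sites, re-running the categorisation picks up their new host, which is what keeps the tag-based backup job source dynamic.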
When using Veeam Backup for Azure, there are two popular options for creating multiple copies of the backup data. The first is a backup copy job from an external repository. The second is to rely on the native storage redundancy provided by Azure, such as geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS). The purpose of this article is to explore the second option by leveraging RA-GRS to achieve backup redundancy across two Azure regions.
As a quick reminder, with RA-GRS and GRS, the data is stored in two separate locations, with one being the primary location and the other being a secondary location. The secondary location is located in a different geographic region than the primary location, ensuring that data is safe even in the event of a regional outage or natural disaster.
It’s worth mentioning that while GRS helps ensure that backup data is safe even in the event of a regional outage or natural disaster, there are some potential disadvantages. For starters, I don’t consider backups stored in GRS a true ‘independent’ second copy of your backup, because GRS replicates blobs even if the data inside them is corrupted. This means any corrupted backups would also be replicated to the second region.
Secondly, we must consider that GRS replicates asynchronously to the target region. An unplanned failover could leave some blobs not yet replicated to the target, which would likely lead to restore or backup failures. While the possibility of this happening is quite slim, it can’t be ruled out.
For testing, an RA-GRS storage account was provisioned in Australia East, for which Azure automatically configured Australia Southeast as the secondary region. This paired secondary region is determined by the primary region and can’t be changed. Access to the storage account is via public endpoints.
I wanted to share this here as a handy reminder of how network mapping and Re-IP works.
For Microsoft Windows VMs, Veeam Backup & Replication also automates the reconfiguration of VM IP addresses. If the IP addressing scheme in the production site differs from the DR site scheme, re-IP rules can be created for the replication job. When a failover occurs, Veeam Backup & Replication checks if any of the specified re-IP rules apply to the replica. If a rule applies, Veeam Backup & Replication mounts the VM disks of the replica to the backup server and changes its IP address configuration via the Microsoft Windows registry, all in less than a second. If the failover is undone or if you fail back to the original location, the replica IP address is changed back to its pre-failover state.
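To make the matching step concrete, here is a minimal sketch of how a re-IP rule could map a production address into the DR subnet while preserving the host portion. This is an illustration of the concept only, not Veeam's actual implementation; the function and rule format are hypothetical:

```python
import ipaddress
from typing import Optional

def apply_reip(ip: str, source_net: str, target_net: str) -> Optional[str]:
    """If ip falls inside source_net, re-map it into target_net keeping
    the same host offset; return None if the rule doesn't apply."""
    addr = ipaddress.ip_address(ip)
    src = ipaddress.ip_network(source_net)
    tgt = ipaddress.ip_network(target_net)
    if addr not in src:
        return None  # rule doesn't match this replica's address
    host_offset = int(addr) - int(src.network_address)
    return str(ipaddress.ip_address(int(tgt.network_address) + host_offset))

print(apply_reip("192.168.1.50", "192.168.1.0/24", "10.0.5.0/24"))  # 10.0.5.50
print(apply_reip("172.16.0.5", "192.168.1.0/24", "10.0.5.0/24"))    # None
```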
The replication process of a VM typically uses the same network configuration as the original VM, but if the DR site has a different network setup, a network mapping table can be created for the replication job. This table maps the source networks to the target networks. During each replication job run, Veeam Backup & Replication checks the original VM’s network configuration against the mapping table. If the original VM network matches a source network in the table, Veeam Backup & Replication updates the replica configuration file to replace the source network with the target one. This ensures that the VM replica always has the correct network settings required by the DR site. In the event of a failover to the VM replica, it will be connected to the appropriate network.
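The network mapping table described above is conceptually a simple lookup: source networks with an entry get swapped for their DR counterpart, anything else is left untouched. A small sketch, with hypothetical network names:

```python
# Hypothetical mapping table: production network -> DR network
network_map = {
    "Prod-VLAN10": "DR-VLAN110",
    "Prod-VLAN20": "DR-VLAN120",
}

def map_network(source_network: str) -> str:
    """Return the DR network for a mapped source network; networks
    without a mapping entry keep their original value."""
    return network_map.get(source_network, source_network)

print(map_network("Prod-VLAN10"))  # DR-VLAN110
print(map_network("Mgmt-VLAN99"))  # Mgmt-VLAN99 (no mapping, unchanged)
```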
You may see “tag is unavailable” errors when attempting to protect a VM using vSphere tags, or a “QueryAllCategoryInfos failed. There was no endpoint listening at https://vcenteraddress/lookupservice/sdk that could accept the message” error when editing the backup job.
I was working with a customer who was troubleshooting the error ‘Failed to load content of the folder’ whenever attempting a file-level restore within Veeam Backup & Replication.
After investigating, we found the time configured on the Windows OS running the backup repository role did not match the backup server; it was out by 5 minutes. After setting the correct time, the file-level restore performed as expected.
I’ve been troubleshooting a CDP issue in my home lab so I wanted to reinstall the I/O filter drivers. Attempting to remove the filter drivers using the VBR console was failing to place the ESXi host into maintenance mode since the VBR VM was running on that host and I had nowhere to vMotion it to (I used nested virtualisation with a single physical ESXi server).
I also wasn’t having much luck uninstalling the Veeam CDP VAIO filter driver manually as per the instructions in https://www.veeam.com/kb4151, but fortunately there is another method to remove them.
1. Place the ESXi host into maintenance mode
2. SSH to the ESXi host
3. Verify the ‘veecdp’ filter driver exists with the command: esxcli software vib list
I’ve recently had a couple of issues when adding a standalone VBR (Veeam Backup & Replication) server to VDRO (Veeam Disaster Recovery Orchestrator). This is a quick write-up to cover the basic troubleshooting steps performed and how the problems were resolved.
The error thrown in VDRO is as follows: “Failed to connect to the server. Specified user is invalid or does not have enough permissions on the server.”
Tags are a great way to manage and organise resources across a vSphere environment; with tags, we can sort and group VMs based on any criteria we wish. Veeam can even leverage these “groups of VMs” in many different ways.
For example, Veeam Disaster Recovery Orchestrator leverages tags for orchestration plans and to make sure VMs are restored to the right place with tag-based recovery locations.
Veeam ONE Business View can automatically create and assign tags to VMs based on any desired criteria with its categorisation engine, on top of Veeam ONE’s capability to run reports based on tags. I’ve previously written about Veeam ONE Business View, which can be found here.
Veeam Backup & Replication (VBR) can also utilise tags for any job. By simply adding tags as the source object of the job, VBR will protect any VMs carrying that tag; this can be especially time-saving if VMs are frequently being provisioned. Another benefit of tags is that, unlike folders and resource pools, VMs can be assigned multiple tags.
Let’s take a look at how we could group a selection of 8 VMs into different backup jobs using tags.
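The selection logic a tag-based job performs can be sketched as a simple filter. The 8 VM names and tag values below are made up for illustration; the point is that one VM can appear in several jobs because it can carry several tags:

```python
# Hypothetical tag assignments for 8 VMs; a VM can carry multiple tags,
# unlike folder or resource pool membership
vm_tags = {
    "vm1": {"Gold", "SQL"},
    "vm2": {"Gold"},
    "vm3": {"Silver"},
    "vm4": {"Silver", "SQL"},
    "vm5": {"Bronze"},
    "vm6": {"Bronze"},
    "vm7": {"Gold", "Exchange"},
    "vm8": {"Silver"},
}

def vms_for_job(assignments: dict, job_tag: str) -> list:
    """Return the VMs a backup job sourced from job_tag would pick up."""
    return sorted(vm for vm, tags in assignments.items() if job_tag in tags)

print(vms_for_job(vm_tags, "Gold"))  # ['vm1', 'vm2', 'vm7']
print(vms_for_job(vm_tags, "SQL"))   # ['vm1', 'vm4']
```

Note that vm1 lands in both the Gold-tier job and the SQL application job, something a folder-based job source couldn't express.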
I was recently involved with a Veeam deployment that ran into problems during testing: its only performance tier had run out of space. Though this wasn’t unexpected, as the provisioned disk was undersized and only temporary until testing finished, it was preventing new backups from completing successfully.
The full performance tier belonged to a Scale-Out Backup Repository that was also configured with a capacity tier (copy + move mode) backed by an immutable AWS S3 bucket. It’s worth mentioning that the backup files in the capacity tier were still within their immutability retention period.
According to the user guide: “If you use the scale-out backup repository, keep in mind that the Delete from disk operation will remove the backups not only from the performance tier but also from the capacity and archive tier. If you want to remove backups from the performance tier only, you should move those backups to the capacity tier instead. For details, see Moving to Capacity Tier.”
Attempting to perform a “Delete from disk” operation was failing with the error “Error: Unable to delete backup in the Capacity Tier because it is immutable”.