Veeam NAS Sizing – Cache Repositories and Metadata

For those diving into NAS sizing, I wanted to share this write-up to help explain the roles of the Cache Repository and Metadata. The sizing of these components varies depending on the type of backup storage you’re using. This consideration has become increasingly important with the addition of object storage support in v12.

A Cache Repository is a storage location where Veeam Backup & Replication keeps temporary cached metadata (folder-level hashes) for the NAS data being protected. A Cache Repository improves incremental backup performance because it enables Veeam to quickly identify source folders that have no changes by matching them against the hashes stored in the Cache Repository.
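
To picture the mechanism, here is a minimal, purely conceptual Python sketch of folder-level change detection via cached hashes. It is not Veeam’s actual implementation; the folder names and hash values are made up for illustration.

```python
from typing import Dict, List

# Hypothetical cache: folder path -> hash recorded during the previous backup run.
previous_hashes: Dict[str, str] = {
    "/share/finance": "a1b2c3",
    "/share/engineering": "d4e5f6",
}

def folders_to_scan(current_hashes: Dict[str, str]) -> List[str]:
    """Return only the folders whose current hash differs from (or is missing in) the cache."""
    return [
        folder
        for folder, digest in current_hashes.items()
        if previous_hashes.get(folder) != digest
    ]

# /share/finance is unchanged and gets skipped; /share/engineering changed and is rescanned.
print(folders_to_scan({"/share/finance": "a1b2c3", "/share/engineering": "ffff00"}))
```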

When a NAS backup targets a disk-based repository, the cache metadata is held in memory, so very little (if any) storage on the Cache Repository is required, roughly 1 to 2 GB for disk-based repositories.

Not to be confused with the cached metadata (folder-level hashes) on the Cache Repository, Metadata is created by Veeam to describe the backup files: the source, file names, versions, and pointers to backup blobs. Most often, when performing restore, merge, or transform operations, Veeam interacts with this Metadata rather than with the backup data.

Metadata is always redundant (“meta” and “metabackup” or “metacopyv2”). The actual placement and number of metadata copies depend on the repository configuration and type. This is important to understand because the number and placement of Metadata copies will affect how much storage is required.

Continue reading

VBR v12.1 – Malware Detection Methods

In this blog, I’ll be exploring the new security features included in the latest version of Veeam Backup & Replication, v12.1: Inline Entropy Analysis, File Index Analysis, and YARA Scanning.

Veeam Backup & Replication v12.1 – Malware Detection

Inline Entropy Analysis
Analyses each source disk block on the fly using an AI/ML-trained model. The scan occurs during every backup run, providing real-time insight into potential anomalies or threats at the block level. Veeam looks for ransomware notes, onion links, and data that has recently become encrypted, without needing additional software.
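
To illustrate the general idea behind entropy-based detection (this is not Veeam’s AI/ML-trained model, just a conceptual sketch), the snippet below computes the Shannon entropy of a data block and flags blocks that look encrypted. The threshold value is an assumption for illustration only.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Return the Shannon entropy of a block in bits per byte (0 to 8)."""
    if not block:
        return 0.0
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in Counter(block).values())

# Hypothetical threshold: encrypted or compressed data tends towards ~8 bits/byte,
# while typical documents and binaries sit noticeably lower.
ENTROPY_THRESHOLD = 7.9

def looks_encrypted(block: bytes) -> bool:
    return shannon_entropy(block) >= ENTROPY_THRESHOLD

plain = b"The quick brown fox jumps over the lazy dog. " * 100
random_like = os.urandom(4096)  # stands in for freshly encrypted data
print(f"plain text : {shannon_entropy(plain):.2f} bits/byte, flagged={looks_encrypted(plain)}")
print(f"random data: {shannon_entropy(random_like):.2f} bits/byte, flagged={looks_encrypted(random_like)}")
```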

Inline analysis is disabled by default given its potential resource consumption, so when planning to enable this feature, be sure to check that your backup proxies have spare CPU resources; plan for 10-15% additional CPU load per proxy. After enabling it, a full disk scan (not a full backup) is performed during the first backup run to create a baseline. It’s possible to exclude machines using Malware Exclusions to reduce the impact of this initial scan.

The sensitivity of inline entropy analysis can be adjusted; low sensitivity is recommended for environments with heavy encryption usage.

Let’s dive deeper and have a look at how Veeam inline entropy scanning works once it’s enabled.

Continue reading

Understanding Immutability Periods for GFS Backups

Determining immutability periods when working with Grandfather, Father, Son (GFS) backups can be a bit tricky, considering GFS immutability periods can be determined by either:

a) the GFS retention period, or

b) the Backup Repository’s immutability period.

Fortunately, Fabian (@Mildur) from the Veeam R&D forums shared valuable insights that simplify this process. The full discussion can be found here.

The key takeaways from the forum discussion are as follows:

  1. Standalone Repositories: In the case of standalone repositories, data remains immutable for the entire GFS retention period, meaning the backup data is secure and unchangeable throughout the whole GFS retention timeline.
  2. Performance Tier without Capacity Tier: When using the Performance Tier without a Capacity Tier, data immutability holds for the complete GFS retention period.
  3. Performance Tier with Move Policy Disabled: Similarly, if the ‘Move Policy’ is disabled within the Capacity Tier, the data will be immutable for the entire GFS retention period.
  4. Performance Tier with Move Policy Enabled: When the Move Policy is enabled within the Capacity Tier, unlike the previous examples, immutability is applied as per the repository’s immutability period.
  5. On Capacity Tier: For backup data stored on the Capacity Tier, immutability aligns with the repository’s settings.
  6. On Archive Tier: Within the Archive Tier, data is immutable for the entire GFS retention period.

An essential note from the forum post highlights that if the GFS retention period is shorter than the Repository immutability period, the Repository immutability period becomes the minimum for all backup files. In other words, whichever is longer out of the two will be the immutability period.
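
As a simple illustration of that rule (a sketch only, for the scenarios above where the GFS retention period governs immutability; the day counts are assumed), the effective immutability works out to the longer of the two periods:

```python
def effective_immutability_days(gfs_retention_days: int, repo_immutability_days: int) -> int:
    """The repository immutability period acts as a minimum, so the longer period wins."""
    return max(gfs_retention_days, repo_immutability_days)

# A monthly GFS point kept for 30 days on a repository with 90 days of immutability
# stays locked for 90 days; a yearly point kept for 365 days stays locked for 365 days.
print(effective_immutability_days(30, 90))   # 90
print(effective_immutability_days(365, 90))  # 365
```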

To simplify this further, check out this fabulous table created by a fellow Veeam colleague, John Suh.

Veeam ONE – Business Categorisation – Stretched Clusters

The business categorisation feature available in Veeam ONE offers a solution for managing and organising virtual machines (VMs) within virtualised infrastructures, such as stretched cluster environments. By leveraging the underlying ESXi host information, this feature allows IT administrators to create a seamless and dynamic approach to VM management, ensuring optimal backup and recovery strategies.

In a stretched cluster scenario, where VMs can move between different data centres for improved availability and disaster recovery capabilities, the ability to group VMs based on their current location can be quite important, for example, when there is a need to ensure transient VMs are protected to a local backup repository.

Leveraging Veeam ONE’s business categorisation means we can tag VMs according to the specific ESXi host they are running on. Tags can then be configured as the source for a Veeam backup job.

Step 1. Create the category.

Step 2. Pick ‘Grouping expression’ as the Categorisation method.

Continue reading

Network Mapping vs Re-IP

I wanted to share this here as a handy reminder of how network mapping and Re-IP work.

For Microsoft VMs, Veeam Backup & Replication also automates the reconfiguration of VM IP addresses. If the IP addressing scheme in the production site differs from the DR site scheme, re-IP rules can be created for the replication job. When a failover occurs, Veeam Backup & Replication checks if any of the specified re-IP rules apply to the replica. If a rule applies, Veeam Backup & Replication mounts the VM disks of the replica to the backup server and changes its IP address configuration via the Microsoft Windows registry, all in less than a second. If the failover is undone or if you fail back to the original location, the replica IP address is changed back to its pre-failover state.

The replication process of a VM typically uses the same network configuration as the original VM, but if the DR site has a different network setup, a network mapping table can be created for the replication job. This table maps the source networks to the target networks. During each replication job run, Veeam Backup & Replication checks the original VM’s network configuration against the mapping table. If the original VM network matches a source network in the table, Veeam Backup & Replication updates the replica configuration file to replace the source network with the target one. This ensures that the VM replica always has the correct network settings required by the DR site. In the event of a failover to the VM replica, it will be connected to the appropriate network.

This information has been copied from the Veeam Forums here: https://forums.veeam.com/veeam-backup-replication-f2/dr-site-difference-between-network-mapping-and-re-ip-t45256.html
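
To make the two mechanisms easier to picture, here is a minimal Python sketch of the rule-matching idea described above. It is purely illustrative and not Veeam’s internal logic; the wildcard rule format, IP ranges and network names are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

# --- Re-IP: map a production IP to a DR IP while keeping the host portion ---

@dataclass
class ReIpRule:
    source_ip: str  # e.g. "192.168.1.*" (production subnet)
    target_ip: str  # e.g. "172.16.10.*" (DR subnet)

def apply_re_ip(replica_ip: str, rules: List[ReIpRule]) -> str:
    """Return the re-addressed IP if a rule matches, otherwise leave it unchanged."""
    for rule in rules:
        prefix = rule.source_ip.rstrip("*")
        if replica_ip.startswith(prefix):
            return rule.target_ip.rstrip("*") + replica_ip[len(prefix):]
    return replica_ip

# --- Network mapping: swap the source network for its DR counterpart ---

network_mapping: Dict[str, str] = {"PROD-VM-Network": "DR-VM-Network"}  # hypothetical names

def map_network(source_network: str) -> str:
    return network_mapping.get(source_network, source_network)

rules = [ReIpRule("192.168.1.*", "172.16.10.*")]
print(apply_re_ip("192.168.1.25", rules))  # -> 172.16.10.25
print(map_network("PROD-VM-Network"))      # -> DR-VM-Network
```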

vSphere tags are unavailable / QueryAllCategoryInfos failed

If you’re seeing a “tag is unavailable” error when attempting to protect a VM using vSphere tags, or a “QueryAllCategoryInfos failed. There was no endpoint listening at https://vcenteraddress/lookupservice/sdk that could accept the message” error when editing the backup job, read on for the fix.

tag is unavailable
QueryAllCategoryInfos failed
Continue reading

Failed to load content of the folder

I was working with a customer who was troubleshooting the following error, ‘Failed to load content of the folder’, whenever attempting a file-level restore within Veeam Backup & Replication.

Failed to load content of the folder

After investigating, we found that the time configured on the Windows OS running the backup repository role did not match the backup server; it was out by 5 minutes. After setting the correct time, the file-level restore performed as expected.

Manually remove the Veeam CDP VAIO filter driver from an ESXi host

I’ve been troubleshooting a CDP issue in my home lab and wanted to reinstall the I/O filter drivers. Attempting to remove the filter drivers using the VBR console failed because the ESXi host could not be placed into maintenance mode: the VBR VM was running on that host and I had nowhere to vMotion it to (I used nested virtualisation with a single physical ESXi server).

I also wasn’t having much luck uninstalling the Veeam CDP VAIO filter driver manually as per the instructions in https://www.veeam.com/kb4151, but fortunately there is another method to remove it.

  1. Place the ESXi host into maintenance mode.
  2. SSH to the ESXi host.
  3. Verify the ‘veecdp’ filter driver exists with the command: esxcli software vib list
Continue reading

Failed to connect to the server. Specified user is invalid or does not have enough permissions on the server

I’ve recently had a couple of issues when adding a standalone VBR (Veeam Backup & Replication) server to VDRO (Veeam Disaster Recovery Orchestrator). This is a quick write-up to cover the basic troubleshooting steps performed and how the problems were resolved.

The error thrown in VDRO is as follows: “Failed to connect to the server. Specified user is invalid or does not have enough permissions on the server.”

Continue reading