I recently had the opportunity to migrate a small regional office from Hyper-V to VMware by leveraging Instant VM Recovery of workloads to VMware vSphere VMs, available in Veeam Backup & Replication v10. With this latest Instant VM Recovery, Veeam can instantly recover any backup created by any Veeam product to a VMware vSphere VM. In other words, Veeam backups of physical servers, workstations, virtual machines or cloud instances can now be restored to VMware. Veeam even handles the P2V/V2V conversion automatically with its own built-in logic. How good is that!
To get started, a backup was taken of the source Hyper-V VMs prior to the scheduled outage window. During the outage window, the source Hyper-V VMs being migrated were powered down and the Veeam backup job was run again to ensure all changes to the VM disks had been backed up. Once the backup completed, the source VMs were not powered on again.
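For environments where this step is scripted, the shutdown and final backup can be driven from PowerShell. A minimal sketch, assuming the Hyper-V module is available on the host and the Veeam PowerShell snap-in on the backup server; the VM and job names are made-up examples, not from the original environment:

```powershell
# On the Hyper-V host: gracefully shut down the source VM
# ('FileServer01' is an example name)
Stop-VM -Name 'FileServer01'

# On the Veeam server: run the backup job one final time
# ('Branch Office Backup' is an example job name)
Add-PSSnapin VeeamPSSnapin
$job = Get-VBRJob -Name 'Branch Office Backup'
Start-VBRJob -Job $job   # waits for the job session to finish
```

From here, leave the source VMs powered off and proceed with the Instant VM Recovery wizard.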
To start the migration process, browse to the backups, right-click the VM to be migrated and click ‘Instant VM Recovery’ then ‘VMware…’
At the next screen, ensure the selected restore point matches the time when the VM was shut down and the last backup was run.
While attempting to instantly recover a Hyper-V VM to an ESXi host, Veeam encountered the following error: ‘failed to publish VM %name% Error: One or more errors occurred’.
Some Veeam users on the Veeam forums were discussing the same symptoms occurring when the ‘Server for NFS’ feature is enabled. In this particular case, the VBR/repository role is configured on a StoreEasy 1460 running Windows Storage Server 2016 Standard, and the ‘Server for NFS’ feature was indeed enabled.
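If you suspect the same conflict, the role service can be checked and removed with the Server Manager cmdlets. This is a sketch only — confirm with Veeam support before removing features from a production repository server:

```powershell
# Check whether the 'Server for NFS' role service is installed
Get-WindowsFeature -Name FS-NFS-Service

# Remove it if nothing else on this host depends on serving NFS
# (a reboot may be required for the removal to fully complete)
Uninstall-WindowsFeature -Name FS-NFS-Service
```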
This is the second part of a three-part blog series on Veeam and Pure Storage FlashBlade. In the previous blog post, we configured a Network File System (NFS) share on a Pure Storage FlashBlade as a Veeam backup repository. In this blog post, we will be focusing on configuring SafeMode snapshots to harden the backup files that are residing on the FlashBlade.
Ransomware attacks continue to rise, with constantly evolving sophistication and complexity. A key part of a ransomware resilience strategy is backing up data on a regular basis and implementing a strong line of defence against threats targeting the backup data. Adopting industry standards for data protection such as the 3-2-1 rule, offline backups and immutable backup storage are effective techniques to protect backup data sets against malicious attacks. Now let’s discuss how to make your FlashBlade system an immutable backup storage target with SafeMode snapshots.
A storage snapshot is a point-in-time, image-level view of data that is impervious to ransomware. This immutability makes snapshots an ideal layer of defence against ransomware. The problem with storage snapshots is that they can still be removed by rogue admins or attackers who gain access to the storage array management. On a Pure Storage system, deleted snapshots are temporarily held in a ‘destroyed’ state that is similar to a recycle bin. If these snapshots are not recovered in a timely manner, they are auto-eradicated, and they can even be manually eradicated prior to the auto-eradication.
SafeMode snapshots, on the other hand, cannot be deleted, modified or encrypted, either accidentally or intentionally. This prevents the manual and complete eradication (permanent deletion) of the backup data stored within the FlashBlade. Due to their immutability, SafeMode snapshots serve as an additional mitigation mechanism against ransomware attacks and rogue administrators.
In this blog, we’ll be configuring an NFS share on a Pure Storage FlashBlade which will be utilised by Veeam Backup & Replication v10 as an NFS backup repository.
Veeam Backup & Replication has supported backups directly from NFS (Direct NFS Access) and restores directly to NFS (Data Restore in Direct NFS Access Mode) natively for a while now, but backing up to an NFS share was always a bit of a challenge. Limitations around mounting an NFS share on Windows meant organisations had to deploy workarounds involving Linux servers and NFS mount points, which often ended up in the ‘too hard’ bucket for administrators who preferred the ease of Windows and SMB.
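For context, the pre-v10 workaround usually looked something like the sketch below: an intermediary Linux machine mounts the NFS export and is then added to Veeam as a Linux backup repository. The hostname and export path are made-up examples:

```shell
# On the intermediary Linux server: mount the NFS export
# ('nfs-array' and '/veeam-repo' are example names)
sudo mkdir -p /mnt/veeam-repo
sudo mount -t nfs -o vers=3 nfs-array:/veeam-repo /mnt/veeam-repo

# Make the mount persistent across reboots
echo 'nfs-array:/veeam-repo /mnt/veeam-repo nfs vers=3 0 0' | sudo tee -a /etc/fstab
```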
The great news is Veeam Backup & Replication v10 can now natively leverage an NFS share directly without any Linux machines acting as a middleman. In typical Veeam fashion, it’s a simple wizard-driven process to add the NFS share, just like any of the other backup repository types supported by Veeam.
We’ll be using a Pure Storage FlashBlade as the underlying storage for the NFS share in this guide. FlashBlades are great targets for Veeam for a few reasons: they provide high performance in a dense form factor, they support multiple protocols such as NFS, SMB and object storage in parallel, and scaling out is a simple case of adding another blade.
New features in Veeam v10, such as Multi-VM Instant Recovery and data APIs, are increasing the storage IOPS demanded of backup repositories, which some legacy backup storage systems fail to deliver. The FlashBlade, being an all-flash storage platform, is designed to handle the random I/O traffic that can be generated by large Multi-VM Instant Recovery sessions (aka restore boot storms).
With the addition of ransomware-proofing backups via Pure Storage SafeMode, which will be discussed in a later article in this series, it’s pretty easy to see why these devices make great Veeam backup targets.
Configuring the FlashBlade NFS Share
Let’s get started – after logging into our FlashBlade management interface, we are greeted with the typical Pure Storage interface; for those familiar with FlashArrays, you’ll notice the interface is identical.
VeeamON 2020 was scheduled to take place in Las Vegas, Nevada, USA from May 4th to May 6th, 2020. Due to current events happening around the world, it’s been moved into the virtual space – VeeamON 2020 is going online, with two days of live collaboration and interactive experiences starting tomorrow (Wednesday, June 17) at 9AM Pacific Daylight Time (PDT). Fortunately, Veeam is running sessions with several different time zones in mind, so you can build out your agenda and plan your experience by consulting the schedule by time zone. Check out more information here: https://www.veeam.com/veeamon/agenda
Even though it’s going digital, VeeamON is still going to be the best place to go to learn about all things Veeam from a variety of industry experts; the full list of guest speakers can be checked out here: https://www.veeam.com/veeamon/speakers. There are quite a few Veeam Vanguards presenting, so I recommend checking them out as well.
Even with a virtual conference, Veeam is making VeeamON 2020 as engaging and interactive as possible for everyone. There will be access to live elements ranging from breakout sessions to expert Q&As, demo sessions, the first Techfest – VeeamathON – and a virtual Expo Hall.
Wrapping up, this is a fantastic opportunity to learn more about Veeam from the comfort of your office or home, for free. I for one cannot wait to see what Veeam have in store for the next 12 months with new features and enhancements.
As described by Andreas Neufert from Veeam, the Virtual Disk Development Kit (VDDK) is provided by VMware and leveraged by Veeam to perform backups and other functions. In this instance, there appears to be a bug introduced into the VDDK after VMware introduced “a faster way to process data over Network NBD mode (async processing)”, which causes certain VMs to fail during backup jobs.
The recommendation from Andreas is to ensure the underlying ESXi hosts and vCenter are updated to the latest builds from VMware; alternatively, Veeam support can add a registry key on the VBR server which disables the new VMware processing.
Veeam will always recommend calling support to obtain the reg key; this is to ensure you are applying the registry tweak for the right reason. However, if you are confident that you are affected by this issue, the details for the reg tweak have been provided below.
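For reference, support-provided values for Veeam Backup & Replication live under the registry key shown below. This is a sketch only — the value name here is deliberately a placeholder, and the real name and data must come from Veeam support:

```powershell
# Sketch only: applying a support-provided registry value on the VBR server.
# '<ValueNameFromSupport>' is a placeholder, not the actual value name.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication' `
    -Name '<ValueNameFromSupport>' -Value 1 -PropertyType DWord -Force

# Restart the Veeam Backup Service so the change takes effect
Restart-Service -Name 'VeeamBackupSvc'
```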
As with most IT enthusiasts, I use a homelab for tinkering, troubleshooting and hopefully a bit of learning. My homelab compute consists of a single server with an i7-3770 CPU / 32GB DDR3 RAM / 1TB SSD connected to an Intel BOXDQ77MK motherboard – basically a glorified PC. While my homelab can’t run dozens of virtual machines concurrently, it’s power-efficient, cheap and, importantly, quiet.
I built the server back in 2013 and to date, the server has been rock solid. At one point the server even hosted this very blog for over 3 years. The version of ESXi is a bit old and overdue for an upgrade, so I thought this would be a good opportunity to document the process for anyone else interested in the ESXi upgrade process. Specifically, in my case, I’ll be upgrading from VMware ESXi 6.5.0 (build 5969303) to VMware ESXi 6.7 (build 15160138).
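One way to do this is from the ESXi command line using VMware’s online depot. The following is a sketch, assuming the host has internet access, SSH is enabled and the host is already in maintenance mode; the image profile name shown is an example — verify the exact profile for your target build against the depot listing first:

```shell
# On the ESXi host via SSH, with the host in maintenance mode.
# Open the firewall so the host can reach VMware's online depot
esxcli network firewall ruleset set -e true -r httpClient

# List the available 6.7 image profiles and pick the build you want
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.7

# Update to the chosen profile (example name shown), then reboot
esxcli software profile update -p ESXi-6.7.0-20191204001-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
reboot
```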
I recently experienced a timeout error while offloading backups to a capacity tier (Azure Blob). It occurred whenever Veeam offloaded large quantities of backup files simultaneously; typically, any more than 6 backup files at a time would result in the offload failing.
This was a problem because the automatic SOBR offload process would process 40+ backup files at a time, most of which would fail until only 6 backup files remained in the queue, at which point the 6 remaining backup files would offload successfully. Typically there would be 250 or so backup files in the offload queue; Veeam would offload these for an hour until the timeout error occurred, then start on the next batch of 40 backup files.
Looking at the Veeam offload job logs (located in the main folder of the Veeam server logs, path ‘C:\ProgramData\Veeam\Backup\SOBR Offload’), we could see the following:
Task example:
[18.08.2019 11:21:23] <176> Info – – – – Response: Archive Backup Chain: b14e8dd9-2351-4236-bd54-a08339859d49_40f33f92-ca5a-45ac-a2ec-d674efd0383d
[18.08.2019 12:57:26] <844> Error AP error: WinHttpWriteData: 12002: The operation timed out
[18.08.2019 12:57:26] <844> Error –tr:Write task completion error
[18.08.2019 12:57:26] <844> Error Shared memory connection was closed
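To gauge how widespread the failures are, the offload logs can be searched for the WinHTTP timeout code. A quick sketch using the default log path mentioned above:

```powershell
# Count 'WinHttpWriteData: 12002' timeout errors per SOBR Offload log file
Get-ChildItem 'C:\ProgramData\Veeam\Backup\SOBR Offload' -Filter *.log -Recurse |
    Select-String -Pattern 'WinHttpWriteData: 12002' |
    Group-Object -Property Path |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Count, Name
```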
Last December I wrote about the Cloud Tier feature coming in Veeam Backup & Replication (B&R) v9.5 Update 4, specifically the ‘Move Mode’ within Capacity Tier. It’s been one of my most popular write-ups and it still receives quite a lot of traffic even today, so with the upcoming v10 release bringing more capability to Cloud Tier, I thought it would be worth a follow-up. To clear up any confusion, Cloud Tier is the marketing name while Capacity Tier is the technical name used in the GUI.
Native integration between Veeam and object storage has been, and continues to be, one of the most discussed topics across the Veeam community in my opinion. Before B&R v9.5U4 was released, organisations had to rely on third-party solutions to function as gateways to object storage, with Veeam jobs tweaked in such a manner as to reduce or eliminate any ‘calls’ to backups written to object storage, minimising egress and access fees. Often these solutions didn’t scale well, were inefficient and proved cumbersome to manage.
With B&R v9.5U4 came Cloud Tier, a feature that provided native object storage integration within Veeam for Amazon S3, Azure Blob Storage, IBM Cloud Object Storage, and supported S3-compatible service providers or on-premises storage.