Part 1 – Veeam Backup & Replication v10 to PureStorage FlashBlade
In this blog, we’ll be configuring an NFS share on a PureStorage FlashBlade which will be utilised by Veeam Backup & Replication v10 as an NFS backup repository.
Veeam Backup & Replication has natively supported backups directly from NFS (Direct NFS Access) and restores directly to NFS (Data Restore in Direct NFS Access Mode) for a while now, but backing up to an NFS share was always a bit of a challenge. Limitations around mounting an NFS share on Windows meant organisations had to deploy workarounds involving Linux servers and NFS mount points, which often ended up in the ‘too hard’ bucket for administrators who preferred the ease of Windows and SMB.
The great news is Veeam Backup & Replication v10 can now natively leverage an NFS share directly, without any Linux machines acting as a middleman. In typical Veeam fashion, it’s a simple wizard-driven process to add the NFS share just like any other backup repository type supported by Veeam.
We’ll be using a PureStorage FlashBlade as the underlying storage for the NFS share in this guide. FlashBlades are great targets for Veeam for a few reasons: they provide high performance in a dense form factor, they support multiple protocols such as NFS, SMB and object store in parallel, and scaling out is a simple case of adding another blade.
New features in Veeam v10 such as Multi-VM Instant Recovery and the data APIs are increasing the storage IOPS demanded, which some legacy backup storage fails to deliver. Being an all-flash storage platform, the FlashBlade is designed to handle the random I/O traffic generated by large Multi-VM Instant Recovery sessions (aka restore boot storms).
With the addition of ransomware-proofing backups with PureStorage SafeMode which will be discussed in a later article in this series, it’s pretty easy to see why these devices make great Veeam backup targets.
Configuring the FlashBlade NFS Share
Let’s get started – After logging into our FlashBlade management interface, we are greeted with the typical Pure Storage interface; for those familiar with FlashArrays, you’ll notice the interface is identical.
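Once the file system has been created and its NFS export enabled, it’s worth confirming the export is reachable before pointing Veeam at it. A minimal sanity check from any Linux host, using standard NFS client tools (the data VIP address and export name below are hypothetical placeholders – substitute your own):

```shell
# List the NFS exports published by the FlashBlade data VIP (hypothetical address)
showmount -e 192.168.10.50

# Test-mount the export and confirm it is writable, then clean up
mkdir -p /mnt/veeam-repo
mount -t nfs 192.168.10.50:/veeam-repo /mnt/veeam-repo
touch /mnt/veeam-repo/write-test && rm /mnt/veeam-repo/write-test
umount /mnt/veeam-repo
```

Note this is purely a sanity check – Veeam v10 mounts the share itself once the repository wizard is completed, so no permanent Linux mount point is required.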
VeeamON 2020 was scheduled to take place in Las Vegas, Nevada, USA from May 4th till May 6th, 2020. Due to current events happening around the world it’s been moved into the virtual space: VeeamON 2020 is going online! There will be two days of live collaboration and interactive experiences starting tomorrow (Wednesday, June 17) at 9AM Pacific Daylight Time (PDT). Fortunately, Veeam is running sessions with several different time zones in mind, so you can build out your agenda and plan your experience by consulting the schedule by time zone. Check out more information here: https://www.veeam.com/veeamon/agenda
Even though it’s going digital, VeeamON is still going to be the best place to go to learn about all things Veeam from a variety of industry experts, the full list of guest speakers can be checked out here: https://www.veeam.com/veeamon/speakers. There are quite a few Veeam Vanguards presenting so I recommend checking them out as well.
Even with a virtual conference, Veeam is making VeeamON 2020 as engaging and interactive for everyone as possible. There will be access to live elements ranging from breakout sessions to expert Q&As, demo sessions, the first Techfest (VeeamathON) and a virtual Expo Hall.
Wrapping up, this is a fantastic opportunity to learn something more about Veeam from the comfort of your office/home for free. I for one cannot wait to see what Veeam have in store for the next 12 months with new features and enhancements.
As described by Andreas Neufert from Veeam, the Virtual Disk Development Kit (VDDK) is provided by VMware and leveraged by Veeam to perform backups and other functions. In this instance, a bug appears to have been introduced into the VDDK after VMware added “a faster way to process data over Network NBD mode (async processing)”, which is causing certain VMs to fail during backup jobs.
The recommendation from Andreas is to ensure the underlying ESXi hosts and vCenter are updated to the latest builds from VMware; alternatively, Veeam support can add a registry key on the VBR server which disables the new VMware processing.
Veeam will always recommend calling support to obtain the registry key; this ensures you are applying the registry tweak for the right reason. However, if you are confident that you are affected by this issue, the details for the reg tweak have been provided below.
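For reference, VBR registry tweaks of this kind are typically DWORD values created under the product’s registry key and applied from an elevated command prompt. The value name below is a deliberate placeholder (use the exact name supplied by Veeam support), and a restart of the Veeam services is usually required for the change to take effect:

```shell
:: Hypothetical value name shown for illustration only; obtain the real one from Veeam support
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v <ValueNameFromSupport> /t REG_DWORD /d 1 /f
```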
As with most IT enthusiasts, I use a homelab for tinkering, troubleshooting and hopefully a bit of learning. My homelab compute consists of a single server with an i7-3770 CPU / 32GB DDR3 RAM / 1TB SSD connected to an Intel BOXDQ77MK motherboard, basically a glorified PC. While my homelab can’t run dozens of virtual machines concurrently, it’s power-efficient, cheap and importantly quiet.
I built the server back in 2013 and to date, the server has been rock solid. At one point the server even hosted this very blog for over 3 years. The version of ESXi is a bit old and overdue for an upgrade so I thought this would be a good opportunity to document the process for anyone else interested in the ESXi upgrade process. Specifically for my case, I’ll be upgrading from VMware ESXi 6.5.0 (build 5969303) to VMware ESXi 6.7 (build 15160138).
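For a standalone host like this, one common upgrade path is the esxcli offline-bundle method. A rough sketch, assuming the 6.7 offline depot zip has already been copied to a datastore (the depot filename and image profile name below are illustrative – list the profiles in your depot to get the exact string):

```shell
# Put the host into maintenance mode before patching
esxcli system maintenanceMode set --enable true

# List the image profiles contained in the offline depot (names vary per build)
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-6.7.0-depot.zip

# Upgrade to the chosen profile, then reboot to complete the upgrade
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-6.7.0-depot.zip -p ESXi-6.7.0-standard
reboot
```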
I recently experienced a timeout error while offloading backups to a capacity tier (Azure BLOB). It occurred whenever Veeam offloaded large quantities of backup files simultaneously, typically any more than 6 backup files at a time would result in the offload failing.
This was a problem because the automatic SOBR offload process would pick up 40+ backup files at a time, most of which would fail until only 6 backup files remained in the queue; at that point the remaining 6 backup files would offload successfully. Typically there would be 250 or so backups in the offload queue: Veeam would offload backup files for an hour until the timeout error occurred, then start on the next batch of 40 backup files.
Looking at the Veeam offload job logs (located in the main folder of the Veeam server logs, path ‘C:\ProgramData\Veeam\Backup\SOBR Offload’) we could see the following,
Task example:

[18.08.2019 11:21:23] <176> Info – – – – Response: Archive Backup Chain: b14e8dd9-2351-4236-bd54-a08339859d49_40f33f92-ca5a-45ac-a2ec-d674efd0383d
[18.08.2019 12:57:26] <844> Error AP error: WinHttpWriteData: 12002: The operation timed out
[18.08.2019 12:57:26] <844> Error –tr:Write task completion error
[18.08.2019 12:57:26] <844> Error Shared memory connection was closed
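When triaging this, it can help to count the timeouts and pull out their timestamps rather than scrolling the raw log. A quick sketch with standard tools, run against a small sample file written in the same format as the excerpt above (the sample lines are recreated here for illustration):

```shell
# Recreate a few lines in the style of the SOBR Offload log excerpt above
cat > /tmp/sobr_offload_sample.log <<'EOF'
[18.08.2019 11:21:23] <176> Info Response: Archive Backup Chain: b14e8dd9-2351-4236-bd54-a08339859d49
[18.08.2019 12:57:26] <844> Error AP error: WinHttpWriteData: 12002: The operation timed out
[18.08.2019 12:57:26] <844> Error --tr:Write task completion error
[18.08.2019 12:57:26] <844> Error Shared memory connection was closed
EOF

# Count the WinHttp 12002 timeout entries
grep -c 'WinHttpWriteData: 12002' /tmp/sobr_offload_sample.log    # -> 1

# Extract the timestamp of each timeout
grep 'WinHttpWriteData: 12002' /tmp/sobr_offload_sample.log | cut -d']' -f1 | tr -d '['
```

On a real log under ‘C:\ProgramData\Veeam\Backup\SOBR Offload’ the same pattern quickly shows whether failures cluster around the hour mark, as they did in our case.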
Last December I wrote about the Cloud Tier feature coming in Veeam Backup & Replication (B&R) v9.5 Update 4, specifically the ‘Move Mode’ within Capacity Tier. It’s been one of my most popular write-ups and it still receives quite a lot of traffic even today, so with the upcoming v10 release bringing more capability to Cloud Tier I thought it would be worth a follow-up. To clear up any confusion, Cloud Tier is the marketing name while Capacity Tier is the technical name used in the GUI.
Native integration between Veeam and object storage has been, and continues to be, one of the most discussed topics across the Veeam community in my opinion. Before B&R v9.5U4 was released, organisations had to rely on third-party solutions to function as gateways to object storage, with Veeam jobs tweaked to reduce or eliminate any ‘calls’ to backups written to object storage, minimising egress and access fees. Often these solutions didn’t scale well, were inefficient and proved cumbersome to manage.
With B&R v9.5U4 came Cloud Tier, a feature that provided native object storage integration within Veeam, with Amazon S3, Azure BLOB Storage, IBM Cloud object storage and S3-compatible service providers or on-premises storage supported.
I’ve been fortunate enough to be a member of the Veeam Vanguard program since 2017; an advocacy program run by Veeam consisting of like-minded individuals who are passionate about all things Veeam, many of whom I consider as friends. I always look forward to time spent with the group as the knowledge and experience shared within has always been invaluable to me. Hopefully, I have many more years in the Vanguard program to come, and I urge anyone with a passion for Veeam to apply for the program which is expected to open in late 2019.
One of many excellent perks that come with the program is attending the Vanguard Summit, for the second year in a row the summit was held in Prague. While it takes around 24 hours to travel from my home town of Brisbane to Prague, it’s well worth it. Prague is an amazing location, vastly different in so many ways compared to what this simple Australian is used to back home.
One of the reasons why the Vanguard Summit is held in the Czech Republic is because Veeam’s main Research and Development (R&D) Centre is located in Prague, making it the prime location for getting the Veeam R&D team and Vanguards in the same room. Anton Gostev, Alec King, Dmitry Popov, Pavel Tide, Nikita Skestakov, Oleg Patrakov and Mike Resseler all made appearances and presented on their areas of expertise.
During a recent Veeam ONE deployment I configured Veeam Intelligent Diagnostics (VID), a great feature that was introduced in Veeam ONE v9.5 Update 4. VID allows Veeam ONE to automatically detect known issues in the configuration and performance of Veeam backup infrastructure. It does this by parsing logs from Veeam Backup & Replication servers, analysing them against a known list of issue signatures, and triggering an alarm with detailed information about what the issue is and how it can be fixed.
I recently experienced an issue while deploying Veeam ONE, all backup proxy servers were failing to display CPU/Memory statistics with the following error, “Failed to collect performance data for object %servername%. The RPC Server is unavailable. (Exception from HRESULT: 0x800706BA)”.