I recently experienced a timeout error while offloading backups to a capacity tier (Azure BLOB). It occurred whenever Veeam offloaded large numbers of backup files simultaneously; typically, any more than six backup files at a time would cause the offload to fail.
This was a problem because the automatic SOBR offload process would process 40+ backup files at a time, most of which would fail until only six backup files remained in the queue; at that point, the six remaining backup files would offload successfully. Typically there would be around 250 backups in the offload queue. Veeam would offload backup files for an hour until the timeout error occurred, then start the next batch of 40 backup files.
Looking at the Veeam offload job logs (located in the main folder of the Veeam server logs, path ‘C:\ProgramData\Veeam\Backup\SOBR Offload’), we could see the following:
Task example:

[18.08.2019 11:21:23] <176> Info Response: Archive Backup Chain: b14e8dd9-2351-4236-bd54-a08339859d49_40f33f92-ca5a-45ac-a2ec-d674efd0383d
[18.08.2019 12:57:26] <844> Error AP error: WinHttpWriteData: 12002: The operation timed out
[18.08.2019 12:57:26] <844> Error --tr:Write task completion error
[18.08.2019 12:57:26] <844> Error Shared memory connection was closed
Last December I wrote about the Cloud Tier feature coming in Veeam Backup & Replication (B&R) v9.5 Update 4, specifically the ‘Move Mode’ within Capacity Tier. It’s been one of my most popular write-ups and it still receives quite a lot of traffic even today, so with the upcoming v10 release bringing more capability to Cloud Tier I thought it would be worth a follow-up. To clear up any confusion, Cloud Tier is the marketing name while Capacity Tier is the technical name used in the GUI.
Native integration between Veeam and object storage has been, and continues to be, one of the most discussed topics across the Veeam community in my opinion. Before B&R v9.5U4 was released, organisations had to rely on third-party solutions to function as gateways to object storage, with Veeam jobs tweaked to reduce or eliminate any ‘calls’ to backups written to object storage and so minimise egress and access fees. Often these solutions didn’t scale well, were inefficient, and proved cumbersome to manage.
With B&R v9.5U4 came Cloud Tier, a feature that provided native object storage integration within Veeam for Amazon S3, Azure BLOB Storage, IBM Cloud Object Storage, and any supported S3-compatible service provider or on-premises storage.
I’ve been fortunate enough to be a member of the Veeam Vanguard program since 2017; an advocacy program run by Veeam consisting of like-minded individuals who are passionate about all things Veeam, many of whom I consider friends. I always look forward to time spent with the group, as the knowledge and experience shared within it has always been invaluable to me. Hopefully, I have many more years in the Vanguard program to come, and I urge anyone with a passion for Veeam to apply for the program, which is expected to open in late 2019.
One of many excellent perks that come with the program is attending the Vanguard Summit; for the second year in a row, the summit was held in Prague. While it takes around 24 hours to travel from my home town of Brisbane to Prague, it’s well worth it. Prague is an amazing location, vastly different in so many ways from what this simple Australian is used to back home.
One of the reasons why the Vanguard Summit is held in the Czech Republic is because Veeam’s main Research and Development (R&D) Centre is located in Prague, making it the prime location for getting the Veeam R&D team and Vanguards in the same room. Anton Gostev, Alec King, Dmitry Popov, Pavel Tide, Nikita Skestakov, Oleg Patrakov and Mike Resseler all made appearances and presented on their areas of expertise.
During a recent Veeam ONE deployment I configured Veeam Intelligent Diagnostics (VID), a great feature that was introduced in Veeam ONE v9.5 Update 4. VID allows Veeam ONE to automatically detect known issues in the configuration and performance of Veeam backup infrastructure. It does this by parsing logs from Veeam Backup & Replication servers, analysing them against a known list of issue signatures, and triggering an alarm with detailed information about what the issue is and how it can be fixed.
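Conceptually, the signature matching VID performs can be sketched as follows. This is a simplified illustration only, not Veeam's actual implementation; the signature patterns and log lines below are invented for the example:

```python
import re

# Hypothetical issue signatures: a regex identifying a known problem in the
# logs, paired with the alarm text to raise. These are made up for illustration
# and are not Veeam's real signature list.
SIGNATURES = [
    (re.compile(r"WinHttpWriteData: 12002"), "Capacity tier offload timed out"),
    (re.compile(r"The RPC Server is unavailable"), "Cannot reach remote server"),
]

def analyse(log_lines):
    """Return (alarm, offending line) for every log line matching a signature."""
    alarms = []
    for line in log_lines:
        for pattern, alarm in SIGNATURES:
            if pattern.search(line):
                alarms.append((alarm, line))
    return alarms

log = [
    "[18.08.2019 12:57:26] <844> Error AP error: WinHttpWriteData: 12002: The operation timed out",
    "[18.08.2019 12:57:27] <844> Info Task session completed",
]
print(analyse(log))
```

The real feature goes further, of course: it ships a curated signature list that Veeam updates over time and attaches remediation advice to each alarm, but the parse-match-alert loop is the core idea.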
I recently experienced an issue while deploying Veeam ONE: all backup proxy servers were failing to display CPU/memory statistics with the following error, “Failed to collect performance data for object %servername%. The RPC Server is unavailable. (Exception from HRESULT: 0x800706BA)”.
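In my experience, error 0x800706BA usually means the collector simply can't reach the remote server's RPC endpoint mapper (TCP 135), most often because of a firewall. As a first sanity check from the Veeam ONE server, a generic TCP reachability probe like the sketch below can quickly confirm or rule that out (the hostname is a placeholder, and this is a diagnostic illustration rather than an official Veeam tool):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Attempt a TCP connection; True if the port accepted the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The RPC endpoint mapper listens on TCP 135; WMI traffic is then negotiated
# on a dynamic high port, so 135 being closed is the most common culprit.
print(port_open("proxy01.example.local", 135))
```

If this returns False for a proxy, check the Windows Firewall (or any network firewall) rules for RPC/WMI between the Veeam ONE server and the target.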
It’s been a bit quiet on the blog front for the last couple of months because I’ve focussed my attention on a “little” side project which recently came to fruition. This side project was, of course, the VMCE 9.5 Unofficial Study Guide that we released on the 15th of March.
Rose Herden and I started working on the book back in November 2018; the initial framework of the book actually came from a presentation that Rasmus Haslund and I submitted for a VeeamON 2017 session. We had hoped to present, but unfortunately our submission was not selected. From the ashes of that presentation, with contributions from Florian Raack plus several peer reviews from fellow Vanguards, VMCT trainers and Veeam employees, a study guide for the VMCE was born. Working closely with Rose on this book has been a fantastic experience; her knowledge of and passion for the VMCE is second to none. What Rose has done in developing and assembling this book has been absolutely phenomenal.
Originally, the book was going to cover the basics of studying for the VMCE along with listing available resources such as the unofficial practice exams, write-ups, etc. Once Rose saw the early draft, though, she suggested we expand the book by adding module guides; these would include key learning goals/outcomes, key terms, learning suggestions, concept checks and even a practice exam for every module of the VMCE courseware. These module guides quickly became the focus of the book, filled with insight, tips and tricks from an experienced VMCT scattered throughout each chapter.
To date, our book has been downloaded over 800 times through our publisher, leanpub.com. We were even fortunate enough to be a featured book on Leanpub during the first week of release. At the current rate, there is every chance of reaching over 1,000 readers in the coming weeks. It’s a bit of an understatement to say this has completely blown us away; we were aiming for 100-200 readers. More importantly, the feedback received so far from the community has been overwhelmingly positive, which is a huge relief.
While the book is available for free, we’ve left the suggested price at $4.99 USD; readers just need to select the $0.00 price during checkout to download it for free. Rose and I are very thankful to the readers who have paid for the book, with any money raised going towards printing hard copies. We’ve initially planned for just 10 copies to be printed, with any money left over to be donated to a charity called TECH GIRLS MOVEMENT.
Archive Tier was announced back at VeeamON 2017 New Orleans alongside a raft of new features scheduled for release with Veeam Backup & Replication v10. Archive Tier would enable Veeam administrators to easily add regular disk-based backup repositories, object-based storage repositories or even tape as an archive extent to a SOBR (Scale-Out Backup Repository) which could then be configured to copy any backup or move sealed backup files from the SOBR across to said archive extent.
The ability to archive backup files to a particular archive extent such as tape or cheaper disk was a great addition, but the significant improvement was the native integration with object storage, which has been a highly requested feature for several years now. During VeeamON it was announced that AWS S3, AWS Glacier, Azure BLOB and Swift-compatible object storage would be supported.
Copying Veeam backup files to object storage has always been possible through the use of third-party vendor storage gateways, such as the AWS Storage Gateway or Azure StorSimple, but speaking from my own experience, these tools don’t always deliver what they promise and require additional skills to support.
I was just checking out Poul Preben’s blog and discovered a fix for an issue I encountered during an earlier Veeam deployment. Don’t you love finding answers to those mysterious issues? I certainly do.
The problem arose whenever I tried to add a particular Windows server into the Veeam managed backup infrastructure. The server was earmarked to become a Veeam proxy and backup repository. As per best practice, we didn’t join this server to the domain and instead created a dedicated local account on the server for Veeam authentication. Remember, if the logins on the machine to be backed up and on the backup storage are the same, we call that unwanted correlation.
Unfortunately, we ran into the following issue when trying to install the Veeam Deployment Service.
[my.repository.fqdn] Failed to install deployment service.
The network path was not found
--tr: Failed to create persistent connection to ADMIN$ shared folder on host [my.repository.fqdn].
--tr: Failed to install service [VeeamDeploymentService] was not installed on the host [my.repository.fqdn].
The Veeam binaries are pushed through the ADMIN$ share, and it turns out that this share cannot be accessed with a local administrator account by default, due to Remote UAC being enabled. Had we used the built-in local Administrator (SID 500) account, however, this issue wouldn’t have occurred.
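For reference, the commonly documented way to relax Remote UAC is Microsoft's LocalAccountTokenFilterPolicy registry value. The fragment below is a sketch of that change, to be run elevated on the repository server itself; weigh it against your own security policy first, since it grants a full admin token over the network to all local administrator accounts, not just the Veeam one:

```shell
:: Disable Remote UAC token filtering so non-built-in local administrators
:: receive a full admin token over the network (required for ADMIN$ access).
:: Run from an elevated prompt on the target server, then retry the deployment.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
```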
Poul details the fix on his blog which I’ll link below.
Anton Gostev recently wrote about a bug that will impact a lot of Veeam environments, so I thought it would be best to mention it here to help get the word out. Veeam have also created a KB article, which you can find here, detailing this issue.
If your Veeam Backup & Replication console is showing a “Failed to check certificate expiration date” message upon opening the backup console, it means that your default self-signed certificate is about to expire.
A self-signed certificate is an identity certificate that is signed by the same entity whose identity it certifies. Veeam uses certificates to implement secure communications between your backup infrastructure components, as well as with any managed backup agents in your environment.
Self-signed certificates are automatically renewed every 12 months by your Veeam server, but due to a bug introduced in v9.5 U3a, the Veeam Backup Service retains stale information about the old certificate even after a new self-signed certificate is automatically generated. If you ignore this message, once the self-signed certificate is automatically renewed after 12 months, agent management functionality, as well as all granular restores, will start failing.
Typically this will occur one year from the certificate’s creation date, so the best course of action is to remedy the situation as soon as you see the error message and before the self-signed certificate expires. The fix is to manually generate a new certificate as described in this Veeam User Guide. Please note that this process will automatically restart the Veeam Backup Service, so it’s recommended to ensure no active jobs are running.
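If you want to independently check how close a certificate is to expiry, the comparison itself is simple. The sketch below uses Python's standard library and is illustrative only; it takes the certificate's notAfter timestamp as input (in the OpenSSL text format) rather than reading it from the Veeam server, and the 30-day threshold is an arbitrary choice:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """not_after: expiry string in OpenSSL format, e.g. 'Jun 1 12:00:00 2030 GMT'."""
    # ssl.cert_time_to_seconds converts the text timestamp to UTC epoch seconds.
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

remaining = days_until_expiry("Jun 1 12:00:00 2030 GMT")
if remaining < 30:
    print(f"Certificate expires in {remaining} days, regenerate it now")
else:
    print(f"Certificate OK for another {remaining} days")
```

The point of the Veeam bug, of course, is that the warning fires correctly but the automatic renewal leaves stale state behind, which is why the manual regeneration described above is the right response.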
It’s worth mentioning that Veeam administrators can select or import their own certificate, but most organisations are still using the self-signed SSL certificates generated when Veeam Backup & Replication is installed.
I recently had the opportunity to visit Prague courtesy of the Veeam Vanguard program; this is my second year as a member of this fantastic community, which is arguably one of the best evangelism/advocacy programs run by any vendor out there. While it was a long journey to get to Prague, it was well worth it, not only to catch up with the other Vanguards but to get access to Veeam’s Product Strategy team, R&D personnel and Product Managers for in-depth discussions of everything Veeam related.
The summit consisted of two and a half days of sessions filled to the brim with Veeam goodies, ranging from upcoming updates to entirely new products still very early in their development cycle (kudos to Veeam for sharing). Veeam certainly was not holding back, as questions raised by fellow Vanguards were answered honestly; nothing was off the table, including questions about v10. All of this provided an insightful glimpse into the inner workings of the Veeam team and further cemented the value I place in the Vanguard program.
The real golden nuggets of information were found whenever we delved into the reasoning behind how and why certain features and capabilities were developed. For example, session speakers might detail the limitations of a particular feature and how they have worked to address them, even if that meant investing more time than anticipated in developing the feature. Yes, it’s a difficult decision to make, but Veeam isn’t in the business of making half-baked software, and it certainly shows in just how reliable their software has been to date.